
Can artificial intelligence algorithms manage medical bias?

Learn how AI can help minimize bias in medical decision-making, and why algorithms must be designed carefully to avoid reinforcing existing prejudices.

The continued emergence of studies and applications of Artificial Intelligence (AI) in the healthcare sector demonstrates that this technology has great potential to revolutionize the way doctors, nurses, hospitals, pharmaceutical companies, and insurers make decisions to safeguard patients' lives. The outlook for these AI-based tools is promising, with the market estimated to grow from $4.9 billion in 2020 to $45.2 billion in 2026.

However, despite its immense potential, the application of AI to the healthcare sector has raised concern among experts in recent years, as algorithms have been found to exacerbate existing structural disparities in the healthcare system.

It is no secret that biases based on sex, gender, race, socioeconomic status, ethnicity, religion, and even disability have affected the health and biomedical sector since the advent of modern medicine in the 18th century. These prejudices, or biases, have meant that, as Londa Schiebinger, Professor of the History of Science in the School of Humanities and Sciences at Stanford, states: "the functioning and anatomy of the white male body has been used as a guide for the discovery of drugs, treatment and medical devices during the last decades of modern medicine".

Worryingly, this bias potentially affects the quality of medical care that some minorities receive in different health systems. As an example (quite relevant during the last two years of the pandemic), consider the pulse oximeter: a biomedical device that measures the amount of light absorbed by oxygenated and non-oxygenated red blood cells to analyze a patient's blood oxygen levels quickly and non-invasively. When used on patients with high levels of melanin in their skin, however, pulse oximeters are three times more likely to misreport blood gas levels. They have also been found to malfunction more often in women than in men, which threatens the ability of both groups to receive adequate treatment.

Now, perhaps the question that arises at this point is:

How might this bias be transmitted to AI algorithms?

As in any other industry, the AI algorithms developed to help decision-making in the healthcare sector are highly dependent on the data used to train them.
Generally speaking, both machine learning and deep learning algorithms (the most commonly used in the healthcare sector) need a database containing all the information relevant to a decision. This information can be, for example, diagnostic images classified by whether or not the patient has a disease, records of biomedical signals with patterns of interest identified over time, or a patient's medical history used to predict the development of some condition. From this information, the machine goes through a learning process: by viewing the data numerous times, it learns to recognize the characteristics that allow it to make decisions just as an experienced human would.

However, as highlighted above, the data available for training these algorithms are often inherently biased, because they do not capture the wide variability found in real-life decision-making. As a result, the machine learns to favor or disfavor a specific population based on demographic, gender, or race characteristics that have no relation to the decision being made. To be objective, although the primary source of bias is attributed to the data, bias can appear at any stage of algorithm construction: from the framing of the problem to be solved, if the variability of the process is not taken into account, to the conditions under which the algorithms are used, if they are deployed in situations for which they were not built.
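To make this mechanism concrete, here is a minimal sketch using synthetic data. The group labels, the "biomarker" feature, and the thresholds are illustrative assumptions, not any real clinical data or Arkangel AI's pipeline; the point is only to show how a training set dominated by one population can yield a model that performs far worse on an under-represented group:

```python
# Minimal sketch: bias transfer from an imbalanced training set.
# All data are synthetic; group names and thresholds are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_patients(n, shift):
    """Simulate a biomarker whose disease threshold differs by group."""
    x = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)  # disease threshold tracks the group
    return x, y

# Group A dominates the training data; group B is under-represented
# and has a different biomarker baseline.
xa, ya = make_patients(950, shift=0.0)
xb, yb = make_patients(50, shift=1.5)
X_train = np.vstack([xa, xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluating on balanced test sets reveals the learned disparity:
# the decision boundary fits group A and misclassifies much of group B.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    x_test, y_test = make_patients(500, shift)
    acc = accuracy_score(y_test, model.predict(x_test))
    print(f"{name} accuracy: {acc:.3f}")
```

Nothing in the model is "prejudiced" in itself; it simply optimizes for the population it saw most, which is exactly how data imbalance becomes algorithmic bias.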

Therefore, it is essential, particularly in the health sector, where an erroneous result can affect the health of a human being, that those in charge of developing the algorithms are aware of this threat.

Can anything be done to mitigate algorithm bias?

To address a threat such as bias in artificial intelligence, the first and most important step is to recognize that it exists and may appear at different stages of algorithm development. With this clear, various strategies can be put forward to reduce its detrimental effects.

First of all, the healthcare professionals collecting the information for the construction of the algorithm must be aware that any biases present in the structuring of the database will be transferred to the results of the tools built on it. Databases must therefore show wide variability in the information collected, ensuring that it is representative of the real-life population and follows high quality standards. It is also vital to increase public and private investment in the construction of unbiased databases, so that projects such as the Stanford Skin of Color Project, which seeks to compile the most extensive public dataset of dermatologically relevant images across different skin tones, are replicated in healthcare entities worldwide.

Second, researchers stress the importance of systematically evaluating AI tools, even after organizations have implemented them. This evaluation is key to understanding how the algorithm performs when confronted with everyday data from the population for which it was created. As Henk van Houten, chief technology officer at Royal Philips, has noted, this means tools must be evaluated using not only traditional accuracy and specificity metrics but also relevant equity metrics. It also implies that the tools must have a high degree of explainability so that experts can evaluate them quickly.
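As a sketch of what "equity metrics" can mean in practice, the snippet below reports sensitivity and specificity per demographic group alongside a simple equal-opportunity gap (the largest difference in sensitivity between groups). The function name, the toy arrays, and the choice of metrics are assumptions for illustration; real audits use whichever fairness criteria fit the clinical context:

```python
# Minimal sketch of a per-group equity report for a binary classifier.
# Arrays below are toy data; metric choices are one common option.
import numpy as np

def group_report(y_true, y_pred, groups):
    """Print sensitivity and specificity for each demographic group."""
    sens_by_group = {}
    for g in np.unique(groups):
        m = groups == g
        tp = np.sum((y_pred[m] == 1) & (y_true[m] == 1))
        fn = np.sum((y_pred[m] == 0) & (y_true[m] == 1))
        tn = np.sum((y_pred[m] == 0) & (y_true[m] == 0))
        fp = np.sum((y_pred[m] == 1) & (y_true[m] == 0))
        sens = tp / (tp + fn) if (tp + fn) else float("nan")
        spec = tn / (tn + fp) if (tn + fp) else float("nan")
        sens_by_group[g] = sens
        print(f"{g}: sensitivity={sens:.2f} specificity={spec:.2f}")
    gap = max(sens_by_group.values()) - min(sens_by_group.values())
    print(f"equal-opportunity gap (max sensitivity difference): {gap:.2f}")

# Toy usage: identical overall accuracy can hide a large per-group gap.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
group_report(y_true, y_pred, groups)
```

A report like this, repeated on everyday post-deployment data, is what turns "evaluate the algorithm systematically" from a principle into a routine check.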

To achieve this, the efforts of governmental and non-governmental entities to promote prospective studies are of great importance. In this regard, entities such as Arkangel AI have made tools like Hippocrates available to the healthcare community, allowing algorithms to be built and tested automatically following prospective and retrospective study models.

Finally, in recent years the importance of making AI accessible to all, a process known as "democratization," has been highlighted. This democratization promotes geographic, gender, race, and class diversity when building algorithms and defining the regulations needed for their use. In the healthcare sector in particular, this process gives healthcare professionals access to the knowledge needed to propose and develop AI models. Likewise, with that knowledge in the hands of healthcare personnel, AI modules can begin to be created within universities to raise awareness of both the technology's benefits and its threats, such as bias.

At Arkangel, we developed the Hippocrates tool to promote this democratization, so that healthcare personnel do not have to worry about the programming behind AI models. Instead, we encourage healthcare professionals to focus on the theoretical foundations of how the models work, from the construction of unbiased databases to the correct implementation and evaluation of the algorithms.

In conclusion, to ensure that the AI algorithms of the future are powerful, fair, and valuable for all humans, we must build the technical, regulatory, and economic infrastructure needed to provide the comprehensive and diverse data required to train and test them. While the future of this technology in the healthcare sector is bright, we cannot continue to allow the development, let alone the implementation, of tools that override the ethical principles of medical practice and affect the care patients receive. Moreover, even though the focus of medicine should always be the patient, the consequences of biased algorithms can also be felt from legal and financial perspectives, as has happened before.

In the meantime, Arkangel AI will remain committed to raising awareness of the threats that affect the performance of AI algorithms in the healthcare sector, and to developing tools that democratize this technology and promote its everyday use.

If you want to know more about Arkangel AI's software, leave us your professional information and one of our agents will contact you for a one-on-one onboarding of our technology and advise you on the project you have in mind.


Book a Free Consultation
