Confronting the Risk of Algorithmic Bias in the AI Healthcare Revolution

Artificial intelligence (AI) is advancing rapidly and will become an important support tool in clinical care. Research shows that AI algorithms can accurately detect melanoma and predict future breast cancer.

However, before AI can be integrated into everyday clinical use, the challenge of algorithmic bias must be addressed. AI algorithms can contain inherent biases that lead to discrimination and privacy issues. AI systems can also make decisions without the necessary oversight or human input.

One example of AI’s potentially harmful effects comes from an international project that aimed to save lives by using AI to discover breakthrough medicines. In one experiment, the team inverted its “good” AI model, built to find therapeutic compounds, to create a new model that searched for harmful ones instead.

In less than six hours of training, the inverted AI algorithm generated tens of thousands of candidate chemical warfare agents, many predicted to be far more dangerous than existing agents. This is an extreme example, but it serves as a wake-up call to assess the known, and possibly unrecognized, ethical consequences of AI.

AI in clinical care

Medicine deals with people’s most private data and often life-changing decisions. A robust AI ethics framework is essential.

The Australian Epilepsy Project aims to improve people’s lives and make clinical care more widely available. Drawing on advanced brain imaging, genetic and cognitive data from thousands of people with epilepsy, we plan to use AI to answer questions that currently go unanswered.

Will this person continue to have seizures? Which drug is the most effective? Is brain surgery a viable therapeutic option? These are the fundamental problems that modern medicine struggles to address.

As the AI lead on this project, my main concern is that AI is moving fast and is subject to minimal regulatory scrutiny. These concerns are why we recently established an ethical framework for using AI as a clinical support tool. This framework aims to ensure our AI technology is open, secure and trustworthy, while promoting inclusiveness and equity in clinical care.

So how do we implement AI ethics in medicine to reduce bias and keep algorithms under control? The computer science principle of “garbage in, garbage out” applies to AI: if we collect biased data from small samples, our AI algorithms are likely to be biased and unlikely to replicate in other clinical settings.
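
To make the point concrete, here is a minimal, hypothetical sketch of “garbage in, garbage out”: a classifier trained on data dominated by one group performs well for that group but only at chance level for an under-represented group whose feature-outcome relationship differs. All data, features and group definitions below are synthetic and purely illustrative, not drawn from any real clinical dataset.

# A synthetic sketch of sampling bias (illustrative only, not real clinical data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def group_a(n):
    X = rng.normal(size=(n, 2))
    return X, (X[:, 0] > 0).astype(int)  # outcome driven by feature 0

def group_b(n):
    X = rng.normal(size=(n, 2))
    return X, (X[:, 1] > 0).astype(int)  # outcome driven by feature 1

# Training data: 1,000 records from group A, only 20 from group B.
X_a, y_a = group_a(1000)
X_b, y_b = group_b(20)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

# Held-out evaluation, with equal numbers per group.
X_a_test, y_a_test = group_a(500)
X_b_test, y_b_test = group_b(500)
print("accuracy, group A:", accuracy_score(y_a_test, model.predict(X_a_test)))  # high (~0.97+)
print("accuracy, group B:", accuracy_score(y_b_test, model.predict(X_b_test)))  # near chance (~0.5)

The aggregate accuracy can look impressive while the model is effectively guessing for the minority group, which is exactly the failure mode that diverse, representative data collection guards against.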

Examples of bias are not hard to find in modern AI models. Popular large language models (such as ChatGPT) and latent diffusion models (such as DALL-E and Stable Diffusion) have displayed striking biases regarding gender, ethnicity and socioeconomic status.

Researchers have found that simple user prompts produce images that perpetuate stereotypes of ethnicity, gender and class. For example, a prompt for a doctor yields images of predominantly male doctors, which is inconsistent with reality: about half of all doctors in OECD countries are female.
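
As a hypothetical illustration of how such a skew can be quantified, the sketch below compares the share of female doctors in a sample of generated images against a 50% real-world reference rate using a binomial test. The image counts are invented for illustration; they are not figures from the research described above.

# Hypothetical audit of generated images (counts are invented for illustration).
from scipy.stats import binomtest

n_images = 200        # generated images for the prompt "a doctor" (hypothetical)
n_female = 38         # of those, images depicting a female doctor (hypothetical)
reference_rate = 0.5  # approximate share of female doctors across OECD countries

result = binomtest(n_female, n_images, reference_rate)
print(f"observed share of female doctors: {n_female / n_images:.2f}")
print(f"p-value against the 50% reference: {result.pvalue:.2e}")
# A very small p-value means the generator's output deviates sharply
# from the real-world benchmark, flagging a stereotyped skew.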

Safe implementation of medical AI

Solutions to prevent bias and discrimination are not easy. Achieving health equity and promoting inclusiveness in clinical research is one of the key ways to combat bias in medical AI.

Encouragingly, the US Food and Drug Administration recently proposed making diversity mandatory in clinical trials. This proposal represents a shift towards less biased, community-based clinical research.

Another obstacle to progress is limited research funding. AI algorithms typically require large amounts of data, which can be costly to collect. It is critical to establish enhanced funding mechanisms that give researchers the resources they need to gather clinically relevant data suitable for AI applications.

We also need to stay on top of the inner workings of AI algorithms and understand how they arrive at their conclusions and recommendations. This concept is often called “explainability” in AI. It relates to the idea that humans and machines must work together for optimal results.
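
One widely used explainability technique is permutation feature importance: shuffle one input at a time and measure how much the model’s performance drops. The sketch below applies it to a synthetic dataset; the clinical-sounding feature names are hypothetical placeholders, not variables from our project.

# Permutation importance on synthetic data (feature names are hypothetical).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
feature_names = ["imaging_score", "genetic_marker", "cognitive_score", "age"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling an informative feature degrades accuracy; shuffling an
# uninformative one barely changes it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")

Scores like these give clinicians a way to check whether a model is leaning on plausible inputs rather than spurious ones before trusting its recommendations.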

We prefer to view predictive models as a form of “augmented” rather than “artificial” intelligence. Algorithms should be part of the process, and medical professionals should retain control over decisions.

We therefore encourage the use of explainable algorithms and support transparent, open science. Scientists should publish their AI models and methodological details to improve transparency and reproducibility.

What does Aotearoa New Zealand need to ensure AI is implemented safely in medicine? At present, concerns about AI ethics are driven primarily by experts in the field. However, targeted AI regulations, such as the EU’s proposed Artificial Intelligence Act, have been put forward to address these ethical considerations.

The European AI Act is a welcome development, as it prioritises “safe AI” that protects people. The UK government recently announced a proactive approach to AI regulation, which could serve as a blueprint for other governments’ responses to AI safety.

In Aotearoa New Zealand, we advocate taking a proactive rather than a passive approach to AI safety. Establishing an ethical framework for the use of AI in clinical care and other areas will help produce AI that is interpretable, safe and unbiased. It will also increase our confidence that this powerful technology can benefit society while safeguarding it from harm.


Mangor Pedersen receives funding from the Health Research Council of New Zealand and Australia’s Medical Research Future Fund.

Courtesy of The Conversation. This material from the original organisation/author may be of a point-in-time nature and has been edited for clarity, style and length. The publisher does not take institutional positions or sides; all views, positions and conclusions expressed herein are solely those of the author.


