Can we remove bias from artificial intelligence?


AI giants face the challenge of creating artificial intelligence models that reflect the world's diversity without being too politically correct.


Artificial intelligence built on a trove of potentially biased information creates a real risk of automating discrimination, but is there a way to retrain machines?

For some, the question is urgent. In the era of ChatGPT, AI will make more and more decisions for healthcare providers, bank lenders, and lawyers, using what it gleans from the internet as its source of information.

An AI's underlying intelligence is therefore only as good as the world it comes from, as likely to be filled with wit, wisdom, and usefulness as with hatred, prejudice, and abuse.

“This is dangerous because people are accepting and adopting AI software and actually relying on it,” said Joshua Weaver, director of the Texas Opportunity and Justice Incubator, a legal consultancy.

“We can end up in a feedback loop where our own biases and cultural biases inform the AI's biases, creating a kind of reinforcing cycle,” he said.

Making technology more accurately reflect human diversity is not just a political choice.

Other uses of AI, such as facial recognition, have seen companies run into conflict with authorities over discrimination.

That was the case with Rite Aid, a U.S. pharmacy chain whose in-store cameras falsely tagged consumers, particularly women and people of color, as shoplifters, according to the Federal Trade Commission.


Experts worry that ChatGPT-style generative AI, which can produce a semblance of human-level reasoning in seconds, opens up new opportunities to get things wrong.

Big AI companies are well aware of the problem, and worry that their models will behave maliciously or reflect Western society too heavily as their user bases expand around the world.

“We've had people contact us from as far away as Indonesia and the United States,” Google CEO Sundar Pichai said, explaining why Google strives to ensure that requests for images of doctors and lawyers reflect racial diversity.

But such efforts can tip over into absurdity and lead to angry accusations of political correctness gone too far.

This is what happened when Google's Gemini image generator absurdly spit out images of World War II German soldiers that included a black man and an Asian woman.

“The mistake was obviously over-applying it where it shouldn't be. That was a bug and we were wrong,” Pichai said.

But Sasha Luccioni, a research scientist at Hugging Face, a leading platform for AI models, warned: “If you think there's a technical solution to bias, you're already on the wrong track.”

Generative AI is essentially about whether the output “matches the user's expectations,” which is largely subjective, she said.

The huge models that ChatGPT is built on “cannot infer what is biased and what is unbiased, so we can't do anything about it,” warned Jaden Ziegler, head of product at Alembic Technologies.

At least for now, it's up to humans to make sure the AI produces the right things and lives up to their expectations.

“Baked-in” bias

But given the enthusiasm surrounding AI, that's no easy task.

Approximately 600,000 AI or machine learning models are available on Hugging Face's platform.

“Every few weeks, new models are released and we are scrambling to assess and document bias and undesirable behavior,” Luccioni said.

One technique under development, called algorithmic disgorgement, would allow engineers to excise offending content without ruining the entire model.

However, there are serious doubts as to whether this actually works.

Alternative methods would “encourage” the model to go in the right direction, “fine-tune” it, and “reward good and bad,” said Ram Sriharsha, Pinecone's chief technology officer.

Pinecone specializes in retrieval-augmented generation (RAG), a technique in which the model pulls its information from fixed, authoritative sources.
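The idea behind RAG can be shown with a minimal sketch: rather than answering from whatever the model absorbed during training, the application first retrieves passages from a fixed, curated corpus and instructs the model to answer only from them. The Python snippet below is a hypothetical illustration, not Pinecone's actual API; the toy corpus, the word-overlap retrieve function, and the build_prompt helper are stand-ins for the embeddings, vector search, and language-model call a real system would use.

```python
# Minimal sketch of retrieval-augmented generation (RAG), under the
# assumptions above: a toy keyword-overlap retriever over a small curated
# corpus, with the final prompt printed instead of sent to a real LLM.

CURATED_CORPUS = [
    "Rite Aid's in-store cameras falsely tagged shoppers as shoplifters.",
    "The FTC brought a case against Rite Aid over facial recognition use.",
    "RAG grounds a model's answers in fixed, authoritative documents.",
]

def retrieve(question: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Ask the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the sources below; otherwise say 'not found'.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What happened with Rite Aid's in-store cameras?"
    passages = retrieve(question, CURATED_CORPUS)
    # In a real pipeline this prompt would be sent to a language model.
    print(build_prompt(question, passages))
```

Because the model is steered toward a vetted corpus rather than the open internet, the hope is that its answers inherit less of the web's bias, though the approach only shifts the problem to whoever curates the sources.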

For Weaver of the Texas Opportunity and Justice Incubator, these “noble” attempts to correct bias are “a projection of our hopes and dreams about what a better future could look like.”

But bias is “inherent in what it means to be human, so it's also built into AI,” he said.


