AI technology exhibits cultural bias. We explain why and what you can do about it


Credit: Pixabay/CC0 Public Domain


Professor Kevin Wong, an AI expert at Murdoch University's School of Information Technology, says it is important to understand the fundamentals of different AI techniques to address the issue of cultural bias in AI.

“Machine learning techniques, including generative AI, require vast amounts of representative data to train complex systems,” Professor Wong said.

“Data-driven machine learning techniques rely on data to establish a system's intelligence, meaning that bias can occur if the data used is not comprehensive enough or has an unbalanced distribution.”
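One concrete way to surface the unbalanced distribution Professor Wong describes is to measure how unevenly groups are represented in a training set before training begins. The sketch below is a minimal illustration, not from the article; the labels and threshold are hypothetical.

```python
from collections import Counter

def imbalance_ratio(labels):
    """Ratio of the most-common to least-common label.

    1.0 means the groups are perfectly balanced; larger values
    indicate over-representation that can translate into model bias.
    """
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Hypothetical demographic labels in a training set:
# one group is represented nine times more often than the other.
labels = ["group_a"] * 900 + ["group_b"] * 100
print(imbalance_ratio(labels))  # 9.0
```

An audit like this is only a first step: a balanced label count does not guarantee the underlying examples are representative, but a badly skewed count is an early warning that the trained system may behave unevenly across groups.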

He said that while many large technology companies are working to ensure that the data used to train generative AI addresses equity, diversity, and ethical issues, the technology may still behave unpredictably if these issues are not properly handled.

Some publicly accessible AI systems have reportedly been unable to generate images of interracial couples, a symptom of a larger problem.

Professor Wong said a “comprehensive assessment and testing strategy” is needed.

Driving system-wide change will require long-term, comprehensive evaluation to build larger databases and improve AI architectures, but Professor Wong said there are strategies to address such issues in the meantime.

These include incorporating other AI techniques that humans can better control and understand, such as Explainable AI and Interpretable AI.

These are systems that ensure that humans retain intelligent oversight and that the decisions and answers given by AI are predictable.
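The contrast between an interpretable system and a black box can be shown with a toy example. Below is a minimal sketch, not from the article: a transparent linear scoring rule that reports each input's contribution to the decision, so a human can audit exactly why an outcome was reached. All names and weights are hypothetical.

```python
def explain_decision(features, weights, threshold=0.5):
    """Score inputs with a transparent linear rule and report every
    feature's contribution, so the decision is fully auditable."""
    contributions = {name: features[name] * weights[name] for name in weights}
    score = sum(contributions.values())
    return {
        "decision": score >= threshold,  # predictable: same inputs, same answer
        "score": score,
        "contributions": contributions,  # per-feature reasoning a human can check
    }

# Hypothetical example: each factor's effect on the outcome is visible,
# unlike a deep model whose internal weights are opaque.
result = explain_decision(
    features={"income": 0.8, "debt": 0.3},
    weights={"income": 1.0, "debt": -0.5},
)
print(result["decision"], result["contributions"])
```

A real interpretable system would be far richer, but the design principle is the same: every decision decomposes into pieces a human reviewer can inspect and challenge.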

This is unlike other forms of AI, where even the designers cannot explain some of the results.

Professor Wong said responsible AI – a kind of “rulebook” of AI principles to guide development – is another important emerging area in systems development.

“There is no single simple solution that will solve this problem overnight. Tackling such a complex problem may require a multidimensional and hierarchical approach.

“How best to tune AI systems to handle sensitive issues such as culture, diversity, equity, privacy, and ethics, all of which influence user acceptance, is an important open question,” Professor Wong said.

“If some parameters and datasets are adjusted to handle these broader issues, is there a systematic way to fully test an AI system before deploying it, without hurting anyone?”

Professor Wong said that despite current issues with diversity and AI, AI could be a powerful tool to “bridge the equity and diversity gap” if used correctly.

“It is important to develop a general system according to some rules and ethical considerations and adapt it to the needs of different cultures and individuals,” he said.

“However, some results may be sensitive or hurtful to some people around the world, so thorough testing and evaluation are essential before widespread use.”


