Algorithmic fairness is key to creating responsible artificial intelligence




More than 30% of the largest companies in the European Union now use artificial intelligence (AI). This trend reflects AI's penetration into many aspects of life, from recruiting employees to selling products to cybersecurity. But behind these algorithms lies one of AI's biggest challenges: bias. Finding mechanisms to mitigate bias and achieve algorithmic fairness has become necessary to build models aligned with agreed-upon human standards.


“Algorithmic fairness informs the design and development of artificial intelligence (AI) systems, including machine learning (ML) systems, so that they operate fairly, impartially and without discrimination,” explains Marco Creatura, data scientist at BBVA AI Factory. “The main concern is that AI must not replicate, reinforce, or amplify existing social biases. Fairness in this context means the absence of bias, which in decision-making is defined as prejudice or favoritism toward an individual or group based on innate or acquired characteristics,” he continues.


Moreover, generative AI algorithms are more complex than traditional machine learning algorithms, whose output is usually a score or probability. “Large-scale language models are trained on vast amounts of text data, typically sourced from the Internet. This data is often uncurated and may contain stereotypes, misrepresentations, and exclusionary or derogatory language about certain social and marginalized groups. An added layer of complexity is the fact that language is itself a technology that reflects social and cultural norms,” says Clara Higuera, data scientist at BBVA AI Factory. Bias detection in generative AI is an emerging field that is still being researched, but it is already being applied to create guardrails and tools that identify these biases.


In fact, according to a UNESCO study, the language models used by generative AI can reproduce gender, racial and homophobic prejudices and promote misinformation.


How does bias, a major obstacle to algorithmic fairness, arise?


As an article in MIT Technology Review points out, biases are diverse and can manifest at different stages:

  1. In the problem definition stage. Developers start by setting goals for the algorithm they are building, which involves turning vague, subjective concepts like “effectiveness” into metrics that are open to interpretations that are not always fair. For example, if a streaming content platform's algorithm seeks to maximize viewers' watch time, it may produce recommendations that reinforce previous interests rather than diversify the viewer's experience with other content.
  2. During data collection. Bias can enter in two ways: either the data collected does not reflect reality, or it reflects existing biases. For example, if an algorithm is fed more photos of light-skinned faces than dark-skinned faces, facial recognition accuracy will be lower for the latter group (see the sketch after this list). Another example is the problems that arose with some recruiting tools, where women were not selected for technical roles because the algorithms had been trained on historically biased hiring decisions.
  3. During data preparation. The attributes an algorithm uses to make decisions, such as age or personal history, are usually selected and prepared in advance, and such attributes can introduce socio-economic or gender-related biases into AI systems. Biases can also creep in when labeling the data the algorithm will later use, particularly during annotation tasks, because annotators may bring their own biases; this is why clear annotation guidelines are important.
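
To make the data-collection point concrete, the short sketch below (illustrative Python with entirely synthetic data and invented group names, not an example from the article) trains a simple classifier on a sample that over-represents one group and then measures accuracy separately for each group.

```python
# Minimal sketch: an unrepresentative training sample can hurt the
# under-represented group far more than the dominant one. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def sample_group(n, rule):
    """Draw n examples with two features; the label follows a group-specific rule."""
    X = rng.normal(size=(n, 2))
    y = rule(X).astype(int)
    return X, y

rule_a = lambda X: X[:, 0] > 0   # group A: label driven by feature 0
rule_b = lambda X: X[:, 1] > 0   # group B: label driven by feature 1

# Unbalanced training data: group A dominates the sample.
Xa, ya = sample_group(950, rule_a)
Xb, yb = sample_group(50, rule_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, held-out data for each group.
for name, rule in [("group A", rule_a), ("group B", rule_b)]:
    X_test, y_test = sample_group(1000, rule)
    print(name, "accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
# Typical result (values vary by run): group A scores well above 0.9, while
# group B ends up not far from chance, mirroring the facial-recognition example above.
```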

What types of biases should algorithmic fairness address?


Bias exhibited by algorithms comes in many forms and types. “Addressing and mitigating bias is essential to developing AI systems that are fair and beneficial for all users, thereby ensuring unbiased decisions and building trust in emerging technologies,” say Marco Creatura and Clara Higuera. According to Google, types of bias include:

  • Selection bias. Selection bias occurs when the examples in a dataset are chosen in a way that does not reflect their real-world distribution. “Since we cannot train an algorithm on all the data, we must carefully select a subset; samples that are not representative of the whole, or that are biased towards one group, will lead to equally biased results,” explains the Cervantes Institute.
  • Automation bias. This is the tendency to believe by default everything an automated system reports, regardless of its actual error rate, so that information that is not always fully substantiated ends up being taken for granted. “When people have to make decisions in a relatively short time frame, and using sparse information… they tend to rely on the advice given by algorithms,” notes Ryan Kennedy, a professor at the University of Houston specializing in automation, in a research paper.
  • Correspondence bias. Correspondence bias occurs when an algorithm generalizes about people and evaluates them according to their group membership rather than their individual characteristics, for example by assuming that everyone who attended the same college is equally qualified for a job.
  • Implicit bias. Implicit bias occurs when assumptions rooted in the algorithm developer's own personal circumstances and experiences, which do not hold at a more general level, make their way into the system. Developers may inadvertently bring in their own biases, which can affect how they approach modeling and training.

Initiatives and proposals for achieving algorithmic fairness


Many other types of bias may exist in the algorithms that society encounters on a daily basis. However, there are also initiatives and regulations aimed at promoting algorithmic fairness and mitigating unfair situations. “Governments and organizations are beginning to introduce guidelines and regulations to ensure that AI technologies are fair and accountable. This includes ethical frameworks and specific laws on the use of AI that protect against discrimination,” said Marco Creatura, who also points to the European Union's Artificial Intelligence Act (AI Act).


In fact, the European Union is also running a project that provides companies with suggestions and templates for auditing whether their systems and applications comply with the General Data Protection Regulation (GDPR) and meet various transparency requirements. In this way, organizations can be sure they are relying on best practices in trust and security, since one of Europe's goals is to ensure that AI works in an inclusive way for everyone.


In Spain, the Spanish Data Protection Agency (AEPD) has published guidelines for auditing AI systems, which include checking for bias in the data sources used. “Current research focuses on how to correct for bias in data and models. This includes more representative data collection techniques, algorithm tweaks, and post-processing adjustments to the results, which will be important to ensure fairer decisions,” says Marco Creatura.
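
One common post-processing idea along these lines is to choose a separate decision threshold for each group so that selection rates end up roughly equal (a demographic-parity style correction). The sketch below illustrates that idea only, using hypothetical scores and group labels; it does not describe the AEPD guidelines or any BBVA system.

```python
# Illustrative post-processing sketch: per-group thresholds that roughly
# equalize selection rates. Equalizing selection rates is only one possible
# fairness criterion and involves trade-offs with other metrics.
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """For each group, pick the score threshold whose selection rate is about target_rate."""
    return {g: np.quantile(scores[groups == g], 1 - target_rate)
            for g in np.unique(groups)}

# Hypothetical model scores and a binary sensitive attribute:
# group 0 receives systematically lower scores than group 1.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
groups = np.concatenate([np.zeros(500, int), np.ones(500, int)])

single_threshold = 0.5
print("selection rates with one threshold:",
      [round((scores[groups == g] >= single_threshold).mean(), 2) for g in (0, 1)])

thr = group_thresholds(scores, groups, target_rate=0.30)
decisions = np.array([scores[i] >= thr[groups[i]] for i in range(len(scores))])
print("selection rates after per-group adjustment:",
      [round(decisions[groups == g].mean(), 2) for g in (0, 1)])
```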


Moreover, fairness indicators and tools for evaluating machine learning models have emerged, together with a push for greater transparency and interdisciplinarity, so that such models involve not only data scientists and developers but also ethicists, sociologists and representatives of affected groups. This can be seen in projects such as the Algorithmic Justice League, an organization founded by Joy Buolamwini, a researcher at the Massachusetts Institute of Technology (MIT) and a pioneer in the field, which analyzes the various ways in which AI systems may lead to discrimination and informs the public about these risks. As the organization notes on its website, while the new tools are promising, it is important to build “a movement to move the AI ecosystem towards fair and accountable AI.”
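
To illustrate the kind of quantities such fairness indicators report, the snippet below computes two commonly used group metrics, the demographic parity difference and the equal opportunity difference, with plain NumPy. The predictions and group labels are hypothetical stand-ins for real model output, and dedicated toolkits offer many more metrics.

```python
# Two simple group fairness metrics, computed from scratch for illustration.
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rate across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, groups):
    """Gap in true-positive rates (recall) across groups."""
    tprs = [y_pred[(groups == g) & (y_true == 1)].mean() for g in np.unique(groups)]
    return max(tprs) - min(tprs)

# Hypothetical predictions for two groups of five people each.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print("demographic parity difference:", round(demographic_parity_difference(y_pred, groups), 3))
print("equal opportunity difference:", round(equal_opportunity_difference(y_true, y_pred, groups), 3))
```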


Techniques designed for generative AI algorithms include guardrails: “These monitor, evaluate, and guide the behavior of generative AI models. They range from instructions within the prompt (such as ‘Please answer politely and respectfully, without insulting anyone’) to models that detect answers that may contain hate speech and should be avoided,” says Clara Higuera.
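
As a rough sketch of both kinds of guardrail described above, the snippet below prepends a safety instruction to the user's prompt and screens the model's answer with a detection model before returning it. `call_llm` and `toxicity_score` are hypothetical placeholders, not a real API or any system used at BBVA.

```python
# Minimal guardrail sketch: steer behaviour through the prompt, then screen the output.
SAFETY_INSTRUCTION = "Please answer politely and respectfully, without insulting anyone."
BLOCKED_MESSAGE = "The generated answer was withheld because it may contain harmful language."

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a generative model (hypothetical)."""
    raise NotImplementedError

def toxicity_score(text: str) -> float:
    """Placeholder for a hate-speech / toxicity classifier returning a score in [0, 1] (hypothetical)."""
    raise NotImplementedError

def guarded_answer(user_prompt: str, threshold: float = 0.5) -> str:
    # Guardrail 1: an instruction inside the prompt itself.
    answer = call_llm(f"{SAFETY_INSTRUCTION}\n\n{user_prompt}")
    # Guardrail 2: a detection model screens the answer before it is returned.
    if toxicity_score(answer) >= threshold:
        return BLOCKED_MESSAGE
    return answer
```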


Moreover, companies have a key role to play in developing unbiased AI. In BBVA's case, algorithmic fairness is a priority principle for responsible AI, and the bank is committed to developing and implementing practices that promote fairness and non-discrimination in its artificial intelligence systems across the Group.


Achieving algorithmic fairness is a key challenge that has yet to be resolved: “This is a big problem that we are working on. We need to raise awareness at all levels involved in the AI development cycle, from senior management to data scientists, so that we can develop truly fair systems,” emphasizes Clara Higuera. Research, regulation and cooperation to ensure that algorithms treat everyone fairly will help reduce bias and promote fairness.
