The Indian government has directed developers of large language models (LLMs) backed by the IndiaAI Mission to fix bias in their AI models, reports the Economic Times. Officials from the Ministry of Electronics and Information Technology (MeitY) told the publication that, given the country's unique diversity and society, developers must ensure that government-supported foundational AI models do not produce insensitive results when faced with difficult prompts.

AI models can absorb bias and disparities embedded in the data on which they are trained. Bias mitigation is the process of systematically identifying and reducing unfair bias in these models.

The report quotes MeitY officials as saying, “Sensitive implications related to caste, gender, food habits, regional and linguistic stereotypes, ethnic and religious differences need to be handled with extreme caution. We want India's models to be inclusive and not discriminatory or based on historical bias. As a result, all LLMs under construction have been directed to incorporate rigorous stress testing into their frameworks.”

The report said the bias-reduction effort is part of a global agreement to implement open-access AI tools called the AI Commons. These tools are said to include ethical AI certification, anonymisation, and stress testing.

Earlier this year, in October, the IndiaAI Mission invited Expressions of Interest (EoI) for Stress Testing Tools (STT), a project that evaluates AI systems under adverse and extreme conditions, officials pointed out. The EoI covers testing models against “adversarial inputs, data drift, or distribution shifts” and goes beyond typical IT load testing.

“A sovereign LLM is an important milestone in our AI journey and should unite the country. The focus should not shift from here to bad actors and criminals who are trying to orchestrate a frenzy by giving compromising prompts to AI. Machine learning tools process data at scale, so care must be taken. Even small biases in the original training data can lead to widespread discriminatory outcomes,” another official told ET.
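The article does not describe how the Stress Testing Tools would detect “data drift, or distribution shifts”, but one common industry approach can be sketched. The example below is purely illustrative and is not from the EoI: it computes the Population Stability Index (PSI), a widely used drift score, to compare the category mix of a model's training prompts against the prompts it sees after deployment. All category names, data, and thresholds are hypothetical assumptions.

```python
import math
from collections import Counter

def distribution(samples, categories):
    """Relative frequency of each category, with add-one smoothing to avoid zeros."""
    counts = Counter(samples)
    total = len(samples) + len(categories)
    return {c: (counts[c] + 1) / total for c in categories}

def population_stability_index(train, live, categories):
    """PSI drift score; values above ~0.25 are conventionally treated as major drift."""
    p = distribution(train, categories)
    q = distribution(live, categories)
    return sum((q[c] - p[c]) * math.log(q[c] / p[c]) for c in categories)

# Hypothetical scenario: the language mix of user prompts shifts after launch.
categories = ["hindi", "english", "tamil", "bengali"]
train = ["english"] * 70 + ["hindi"] * 20 + ["tamil"] * 5 + ["bengali"] * 5
live  = ["english"] * 30 + ["hindi"] * 50 + ["tamil"] * 10 + ["bengali"] * 10

psi = population_stability_index(train, live, categories)
print(f"PSI = {psi:.3f}", "-> significant drift" if psi > 0.25 else "-> stable")
```

A real stress-testing suite would go well beyond this single statistic (adversarial prompt batteries, red-teaming, fairness audits across caste, gender, and regional attributes), but a drift score like PSI is a typical building block for the distribution-shift checks the EoI mentions.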
