Flaws in current AI models: Implications for the Global South

The current artificial intelligence (AI) revolution was largely driven by the development of the Transformer architecture in 2017 and the subsequent creation of large language models (LLMs). Much of the progress in AI since then has rested heavily on LLMs and related systems such as generative AI (GenAI), diffusion models, and agentic AI. The seemingly impressive advances achieved by these models have prompted a range of claims from AI developers and experts, from the possibility of mass layoffs to the prospect of achieving artificial general intelligence (AGI) in the near future.

However, upon closer inspection, most of these arguments fall apart, with AI adoption and automation witnessing widespread failure across multiple domains and use cases. Moreover, the current hyperscaling model of AI development is becoming increasingly unsustainable as energy and resource requirements continue to rise, a problem further exacerbated by the massive debts incurred by AI companies and hyperscalers. This should serve as a wake-up call to the Global South, which is in the process of honing and deploying its own sovereign AI capabilities. In the aftermath of the IndiaAI Impact Summit 2026, these issues necessitate a reassessment of the Global South’s current development models, highlighting the need to retain their human-centric roots rather than drift toward increasingly AI-centric trends.

Identifying AI use cases and implementation failures

Since the advent of GenAI, company executives have repeatedly asserted that AI will soon automate tasks traditionally performed by humans, especially in areas such as coding and remote work. These claims have been debunked many times. A 2025 randomized controlled trial conducted by Model Evaluation and Threat Research (METR) found that experienced open-source developers using AI tools took 19 percent longer to complete tasks than those working without AI. The Remote Labor Index, developed by the Center for AI Safety and Scale AI, found that virtually all frontier AI models remain woefully inadequate at automating remote labor tasks, with the best-performing model (Opus 4.6) achieving an automation rate of just 4.17 percent.


According to MIT’s The State of AI in Business 2025 report, 95 percent of GenAI pilot projects have failed. Examples of failed AI implementations span multiple companies, including McDonald’s, DPD, Air Canada, Klarna, and Salesforce, some of which fired employees in favor of AI agents only to rehire them later. Sectors such as fintech, healthcare, education, manufacturing, and government each face their own concerns regarding AI adoption. For example, multiple studies from Oxford University and Stanford University point to the dangers of adopting AI chatbots in medicine. A recent study by ECRI (formerly the Emergency Care Research Institute) identified the misuse of AI chatbots as the biggest health technology hazard of 2026.

This is exacerbated by the fact that the term “AI” has been deliberately obfuscated to avoid scrutiny and is being used to describe tasks that do not require AI at all. For example, Norwegian technology company 1X launched NEO, billed as the world’s first consumer humanoid robot, in 2025. 1X initially claimed that NEO was AI-driven, but the robot was later found to rely on remote human operators for certain tasks, potentially violating user privacy while being marketed as AI automation.

As a result, while AI automation remains popular among AI developers and enthusiasts, in some cases it appears to be acting as a cover for austerity measures. Despite multiple claims to the contrary, rigorous peer-reviewed research demonstrating successful AI use cases remains very limited, with LLMs acting as poor substitutes for humans in most cases and actively causing harmful side effects in others.

The unsustainable nature of current AI models

In addition to the aforementioned implementation failures, the current hyperscaling model of AI development is steadily becoming unsustainable due to the enormous energy requirements of data centers, with repeated instances of large-scale power outages, water shortages, and air pollution sparking community protests around the world. For example, data centers already accounted for more than 4.4 percent of annual electricity consumption in the United States in 2023, a figure that had nearly doubled since 2018. Additionally, power bottlenecks have caused widespread delays in data center projects, with approximately 11 GW of global capacity planned for 2026 remaining “in the announced stage with no sign of construction.”

Figure 1: Global data center capacity additions (in gigawatts) by year of operation


Source: Axios

On the financial side, most pure-play AI companies and hyperscalers carry large debts against limited returns on investment, fueling growing claims of circular investment and the imminent bursting of the so-called “AI bubble”. For example, OpenAI earned only about USD 20 billion in annualized revenue in 2025, despite having committed over USD 1.4 trillion to infrastructure spending. The situation is similar for hyperscalers such as CoreWeave, which posted annual revenue of just over USD 5 billion in 2025 but plans to spend USD 30-35 billion in 2026.

Misallocation of capital was a common feature of past technology booms such as the dot-com bubble, but the key difference is that much of the infrastructure built then remained usable even after the bubble burst. In the case of an AI bubble, the massive data center infrastructure currently being built will have very limited usefulness once LLMs plateau. Yet Big Tech companies are now so firmly entrenched in the time-consuming and costly hyperscaling paradigm that they no longer have the option to course-correct.

Why AI adoption fails: The fundamental problem with LLMs

One of the main reasons for the current interest and historic investment in large pre-trained models and the hyperscaling paradigm lies in the “emergent capabilities” of LLMs, especially their apparent reasoning abilities. These have fueled widespread speculation that LLMs will inevitably evolve into ever more capable models, ultimately paving the way to the holy grail of AGI. However, there is evidence to suggest that these emergent capabilities are largely artifacts of inadequate metrics and benchmarks. Furthermore, the improvement in LLM benchmark performance with scale may stem from enhanced pattern memorization rather than genuine reasoning or linguistic ability, and is likely to plateau, especially under more rigorous benchmarks.

According to a survey of 475 experts conducted by the Association for the Advancement of Artificial Intelligence (AAAI), 76 percent of respondents said that AGI is unlikely to be achieved by scaling current machine learning paradigms. Factuality remains a fundamental limitation of current LLMs and GenAI systems, leading to problems such as hallucination and bias that undermine the trustworthiness of AI.


Approaches to improving factuality include reinforcement learning, retrieval-augmented generation (RAG), and chain-of-thought reasoning, but future advances in AI are likely to rely on the development of new or hybrid neural network architectures, such as neurosymbolic reasoning systems, and non-neural architectures, such as information lattice learning. However, these alternative paradigms are still in the early stages of development. The grounding idea behind RAG is illustrated in the sketch below.
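The following is a minimal sketch of why retrieval-augmented generation helps with factuality: answers are constrained to retrieved text and remain traceable to a source. The toy corpus and word-overlap relevance score are hypothetical stand-ins for the dense embeddings and LLM a real system would use; this is illustrative only, not a production implementation.

```python
# Minimal RAG sketch: ground answers in retrieved documents.
# The corpus and word-overlap scorer are toy stand-ins, not a real pipeline.
from collections import Counter

CORPUS = [
    "Data centers accounted for over 4.4 percent of US electricity use in 2023.",
    "Retrieval-augmented generation grounds model output in retrieved documents.",
    "Chain-of-thought prompting asks a model to reason step by step.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: how many query words appear in the document."""
    query_words = Counter(query.lower().split())
    doc_words = set(doc.lower().split())
    return sum(count for word, count in query_words.items() if word in doc_words)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def answer(query: str) -> str:
    """Answer only from retrieved context; refuse rather than hallucinate."""
    context = retrieve(query)
    if not context or score(query, context[0]) == 0:
        return "No supporting document found."
    return f"Based on the retrieved source: {context[0]}"

if __name__ == "__main__":
    print(answer("how much US electricity did data centers use"))
```

The point of the sketch is the refusal path: when retrieval finds nothing relevant, the system declines to answer rather than generating unsupported text, which is precisely the behavior that unaided LLMs lack.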

This suggests that the current AI paradigm, which relies mainly on LLMs, has systematic and structural deficiencies and is unsuitable for large-scale deployment. The deployment of AI therefore requires increased scrutiny, especially in use cases that affect critical human sectors.

Conclusion: The case for a human-centered Global South agenda

AI adoption and cooperation in the Global South, especially in the field of human and social development, was the main theme of the IndiaAI Impact Summit 2026. However, the dangers of rushed AI adoption cast serious doubt on this approach. For those who claim to be maximizing social benefit, the current risks posed by LLMs far outweigh the benefits that mass adoption would bring. More than a matter of hallucinated or fabricated output, social AI applications risk affecting real people and harming their lives.

This is not to say that AI is of no use to society; there are multiple examples of successful AI utilities. For example, India has seen clear success in deploying chatbots for language translation through platforms such as Bhashini. Research tools like AlphaFold have been so effective in accelerating scientific innovation that the Google DeepMind team won the Nobel Prize in Chemistry in 2024. It must be emphasized, however, that while AI can complement human capabilities, it is far from replacing them and will continue to require significant human intervention and oversight. Moreover, the main reason behind the success of such use cases is that erroneous outputs carry no significant real-world consequences in those contexts: a hallucinated translation or ChatGPT response does not pose a significant threat to an individual’s life. For healthcare chatbots and agricultural assistants, on the other hand, even a small fraction of such outputs can have serious implications.


The story of global AI adoption has had the unfortunate effect of gradually reducing humans to mere dots in datasets. This is the antithesis of the Global South’s decades-long pursuit of a human development and inclusion agenda. Rushing AI adoption out of global peer pressure and a fear of missing out could have disastrous consequences for the Global South, leaving it at risk of falling victim to clever marketing orchestrated by a handful of companies. The Global South therefore needs to recalibrate its development agenda around labor rights and human development rather than simply prioritizing AI adoption. It must identify the use cases where AI can be deployed with minimal risk and be most useful, or at least those where negative impacts on humans can be minimized, while resisting the mass-adoption narrative.


Prateek Tripathi is an Associate Fellow at the Observer Research Foundation’s Centre for Security, Strategy and Technology (CSST).

The views expressed above belong to the author.


