Most of this scale is built in, and for, developed countries. Everyone likes to pretend that it simply trickles down and is relevant to the rest of the world.
But in India and much of the Global South, AI will be gated by a boring combination of physics and geopolitics. That means patchy connectivity, unreliable power, high latency, low- and mid-range devices, dizzying cloud costs, and growing concerns about data sovereignty.
For example, when Microsoft cut off services to Nayara Energy last year, India found a critical energy company at the mercy of a big foreign technology company subject to foreign laws.
This is why the policy debate is starting to shift from “Who has the biggest model?” to “Who will make AI widely available?” For the Global South to truly adopt AI en masse, it will need to overcome the formidable hurdles of the three Cs: connectivity, cost, and computing.
The edge advantage
The logical path to achieving this is to move intelligence closer to where life happens, rather than sending everything to the cloud. A good way to do this is with edge AI, also known as on-device AI.
In short, edge AI means running AI models directly on devices such as mobile phones, laptops, PoS terminals, cameras, medical equipment, and factory sensors, rather than sending every prompt, image, and transaction to a remote data center for processing.
The edge resides where data is created and as close as possible to the end user.
This isn’t a new idea. Technology has a long history of decentralization. Early computing existed on mainframes. Later, PCs began to perform calculations on desks. Early enterprise software resided on servers. And smartphones put powerful computing in your pocket.
In the early days we had radio (the cloud) and we could listen to what the broadcast stations were transmitting. Then came the Walkman (a device) that allowed you to carry tapes and play them anywhere, regardless of the signal.
In modern times, biometric unlocking is a good analogy. Your phone doesn’t send your fingerprint to a cloud server every time you unlock it; the match happens on the device, which makes it faster and more private. Edge AI applies the same pattern to modern AI models.
The benefits are clear and of great importance for India and the Global South.
◆ Lower latency. When the model runs locally, responses are near-instantaneous.
◆ Offline operation. It keeps working where there is no Internet connection. That is not just a nice-to-have; it is the difference between adoption and abandonment in large parts of the country.
◆ Privacy. Raw data never leaves the device.
◆ Lower cost. Because the device does all or most of the work, you no longer pay for cloud inference on every interaction.
The flip side
Edge AI is not a magic wand, and it brings real trade-offs. Device memory is limited, batteries drain faster, and pushing model updates to millions of devices is hard. Nor can edge AI replace the cloud: training, and the largest language models, still require full-fledged data-center compute.
But a balance can be struck. Edge AI is not a new concept; what is new is that recent technology has made this rebalancing realistic.
The breakthroughs we’re seeing today come from three converging advances: better chips, better models, and better math. Many mobile processors now include dedicated neural processing units (NPUs), hardware designed specifically for AI inference.
At the same time, the industry is embracing small language models (SLMs) such as Microsoft’s Phi and Google’s Gemini Nano, recognizing that you don’t need a trillion parameters to be smart. Techniques such as quantization (compressing a model so each weight uses fewer bits) and distillation (training a smaller model to imitate a larger one) can put “enough” intelligence in your pocket.
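To make the quantization idea concrete, here is a minimal sketch of symmetric 8-bit quantization in plain NumPy. Real mobile toolchains are far more sophisticated (per-channel scales, calibration, quantization-aware training); the function names here are illustrative, not any library’s API.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric 8-bit quantization: store int8 values plus one float scale."""
    scale = float(np.abs(weights).max()) / 127.0   # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.27], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# w_hat is close to w, but the weights are stored in a quarter of the
# memory (8 bits instead of 32), which is what makes on-device models fit.
```

The trade-off is visible in the example: tiny weights like 0.003 round away entirely, which is why quantized models lose a little accuracy in exchange for a large drop in memory and compute.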
This is why edge AI looks almost purpose-built for a country like India, whose constraints developed markets rarely face.
Inclusive AI
Constraints, however, are a kind of design superpower: they force optimization and practicality and challenge existing norms. Take DeepSeek-R1, for example, which was born of the compute constraints imposed on China’s AI industry.
With smaller models and smarter devices, you can do more: on-device translation and voice assistance in Indian languages for frontline workers; offline tutoring and personalized learning on low-cost tablets; real-time visual inspection of gas pipelines; fraud detection and anomaly alerts on PoS devices in low-connectivity settings; and, in healthcare, triage tools that guide village nurses even when networks are unreliable. In all of these cases, local processing reduces privacy risk by keeping raw voice, face, document, and location traces off the network.
An even smarter way to run AI is a hybrid approach. Let your device handle everyday, frequent, and privacy-sensitive tasks. Escalate to the cloud when you need complex inference, large-scale context, cross-user learning, or rare computations.
A hybrid edge + cloud architecture gives you the best of both worlds: speed and resiliency locally, and depth and scale when you need it.
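The escalation logic described above can be sketched as a simple routing rule. This is a hypothetical illustration, not a real framework; the field names and the context-size threshold are assumptions made for the example.

```python
# Hypothetical edge/cloud router. All field names and thresholds are
# illustrative assumptions, not a real API.
def route(task: dict) -> str:
    """Decide where a request runs in a hybrid edge + cloud architecture."""
    if task.get("privacy_sensitive"):          # raw voice, biometrics, documents
        return "edge"
    if not task.get("online", True):           # no connectivity: edge or nothing
        return "edge"
    if task.get("context_tokens", 0) > 8_000:  # large context exceeds on-device limits
        return "cloud"
    if task.get("needs_frontier_model"):       # rare, complex reasoning
        return "cloud"
    return "edge"                              # default: frequent, everyday tasks stay local

# Privacy wins even when the task is complex; big contexts go to the cloud.
assert route({"privacy_sensitive": True, "needs_frontier_model": True}) == "edge"
assert route({"online": True, "context_tokens": 50_000}) == "cloud"
```

The design choice worth noting is the ordering: privacy and connectivity checks come first, so sensitive or offline work can never be routed off the device by a later rule.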
To be clear, the best AI is still built in the cloud, and the cloud remains irreplaceable and extremely important. But edge thinking must be built into AI strategy from the start.
Public procurement must incorporate edge and hybrid architectures, especially in education, health, agriculture, and citizen services. Standards and incentives should encourage privacy-protecting, on-device processing where possible, rather than defaulting to centralizing data.
Language is the last mile of AI adoption, so India needs to favor local-language models optimized for on-device deployment.
We should treat edge innovation as a national priority, not a side quest, because the same solutions that work in Bihar, Kenya, and Indonesia, once proven, often work everywhere.
The upcoming India AI Impact Summit 2026 will clearly include voices from the Global South and is already building on relevant ambitions to democratize AI resources and bridge the divide.
However, true democratization does not come from a declaration. It comes from design choices. AI does not become inclusive simply by getting bigger; by getting smaller and closer, it can become inclusive too.
(Views are personal)
