
(Illustration: iStock/Cemile Bingol)
Discussions about artificial intelligence tend to oscillate between techno-utopianism and existential fear, between asking what AI can do and asking how to contain it. But there are deeper questions we need to ask: What kind of world is AI helping to build, and whom will it help?
AI is neither a neutral tool nor an inevitable force. As currently developed and deployed, it is less a technical system than an ideological project. For this reason, we see little point in debating whether profound AI-driven sociological change will occur, given speculation about the technical limitations of current architectures. What must be addressed first is the social impact these systems are already having.
But, as Stuart Russell warned in his 2021 BBC Reith Lectures, “Living with Artificial Intelligence,” doing so will be particularly difficult if we assume that AI is inevitable. As he argued, lethal autonomous weapons are not just battlefield innovations; they represent a broader symbolic crisis of governance without reciprocity, context, or deliberation. And the narrative that AI moves at the speed of light and cannot be stopped disorients public institutions and makes it difficult even to ask whether this is what anyone wants. If the pace is predetermined, politics becomes mere performance rather than genuine democratic participation.
Similarly, framing AI as neutral technology effectively shields it from political scrutiny, disguising moral choices as if they were purely procedural or technical. Just as markets outsource questions of value to price mechanisms, AI outsources moral and social questions to problems of optimization. But the apparent neutrality of AI is not an absence of values; it is the dominance of hidden values embedded in AI systems that reflect particular priorities, even as they claim objectivity. As Michael Sandel argues in his 2009 book Justice: What's the Right Thing to Do?, attempts to avoid explicit moral reasoning often smuggle in important moral choices without public scrutiny. When AI inherits this attitude, questions of fairness, harm, and recognition are reframed as questions of code and metrics, insulated from democratic oversight. And just as technologies such as AI are profoundly shaped by the societies that produce them, those societies will in turn be shaped by what AI optimizes, whom it empowers, and the ways of living it normalizes. What we are witnessing is not the rise of machine intelligence, but the consolidation of an extractive logic under the mask of innovation.
Another frame to reconsider is the “generality” of AI: the way, consciously or unconsciously, a variety of different technologies blur together as “AI” (and in the case of “AGI” or “super AI,” the definition shifts so much that discussion becomes difficult at all). The more general a technology appears, the harder it is to regulate or even evaluate, and the easier it is to simply assume its inevitability.
As Russell suggests, the moment we specify the context and begin asking clear, concrete questions, it becomes possible to see (and scrutinize) the changes that AI is actually bringing about.
A world of scale
AI requires massive concentrations of capital, data, computing, and control, often justified by appeal to economies of scale, a concept inherited from industrial production. The assumption is that efficiency increases with size, and that only monopoly or near-monopoly firms are viable. But this logic breaks down when applied to digital systems. Not all inputs scale equally: coordination costs, context sensitivity, and adaptation to local variation often increase with scale. The current AI industry reflects not a natural outcome of efficiency but a strategic convergence around the belief that a few universal solutions are preferable to many local ones. It is as if a single dominant treatment for all cancers were deemed preferable to a portfolio of specific therapies that account for variation. This logic rewards centralization even when the underlying conditions call for diversity.
The ideology of scale is not unique to AI, but AI is being developed by a group of companies with already intensive global deployments. After all, you would be hard-pressed to find tools as ubiquitous around the world as Excel, WhatsApp, and Google Maps. How did we all come to use such a narrow set of solutions? Whom do they serve? Are widely adopted tools successful because they are better, or because they are imposed?
Indeed, in 2025, the frontier AI landscape remains highly concentrated in the hands of a few companies. Together, Microsoft, Google, Amazon, and Meta account for the majority of large-scale AI model development, data infrastructure, and training runs. (According to the 2025 AI Index, more than 70 percent of the most computationally intensive models were trained within these companies or their direct affiliates.) This concentration extends beyond model development to access to computing resources, determining who gets to experiment, iterate, and scale.
Open-source efforts do exist, however, and their viability shows that a proprietary approach to AI development is not necessary. These models demonstrate that small- and large-scale approaches, public and private ecosystems, and centralized and federated infrastructures can all be sustained. Yet their influence remains limited by deeper infrastructural asymmetries, particularly in access to data hosting and large-scale computing. These hidden layers create capture loops that entrench broader power consolidation and narrow the horizon of plural AI development.
In this sense, the logic of scale is not neutral. It is a design principle that concentrates decision-making power and limits diversity.
A world of consistency rather than clarity
AI prioritizes consistency over clarity. Because it is correlation-based, it optimizes for performance metrics such as accuracy, efficiency, and fluency while eroding common sense, contextual reasoning, and interpretive depth. As a result, GPT-based models may outperform humans on benchmark tests yet fail basic tests of moral reasoning and the handling of ambiguity. Already, there is pressure in law and education to adapt to AI by shifting the human role from deliberation to prompt design.
This reversal matters. The ability to interpret, disagree, and suspend judgment is essential to a pluralistic society. And the impact goes beyond cognition. When a system rewards simulation over understanding, meaning begins to disappear, and when meaning collapses, so does the self. As Ma Jian reflected in Red Dust, writing about the breakdown of marriages after Mao's reforms: when people lose their sense of self, relationships become only a temporary distraction from inner emptiness and tend to collapse at the first obstacle. Ma's point was not simply an observation about intimacy but about the dissolution of shared meaning, as relationships held together by convenience rather than recognition lose their depth. A realm of empty bonds emerges, and with it a society hollowed out from within.
From agglomeration to atomization
José Ortega y Gasset warned of “the masses” becoming free of inner discipline and dependent on external direction (The Revolt of the Masses, 1929). Hannah Arendt deepened the insight, arguing that such fragmentation is essential to domination: a society of uprooted individuals is easier to rule (The Origins of Totalitarianism, 1951). Similarly, Günther Anders described the “obsolescence” of humans who struggle to adapt to machines, rather than the other way around (The Obsolescence of Man, 1956 and 1980).
If AI rewards performance rather than presence, we risk accelerating this drift toward atomization. Instead of asking what AI can do, we should ask what kind of world AI-powered systems are building. If AI is essentially an amplifier of data, decision-making, and organizational structure, its most immediate dangers are not technical but systemic. AI reflects the logic of its deployment: built into extractive systems, it scales extraction. Wrapped entirely in economistic thinking, it will be blind to qualitative signals and will reproduce the same accountability gaps that Dan Davies vividly portrayed in his 2024 book The Unaccountability Machine.
But if we embed AI in civic architectures, could it instead extend deliberation, contemplation, and care?
Designing for pluralism, direction, and civic imagination
Taiwan's digital minister Audrey Tang has said that democracy itself is a form of social technology. Like any technology, it can be iterated, refactored, and improved. But if AI is to aid democratic evolution, it must be governed not as an object of innovation, not as neutral, inevitable, unquestionable progress, but as a deliberate infrastructural choice.
Such an alternative horizon needs no blueprint; it needs direction and principles. Decentralization, for example, rejects the default assumption that scale means monopoly. Federated learning, data trusts, and publicly owned infrastructure offer alternatives to centralized control, including initiatives now getting under way across Europe.
A second approach is to design systems in which human judgment and machine accuracy coevolve, emphasizing structural coupling over substitution. Estonia's X-Road platform is a notable example, integrating automated services while maintaining public oversight and legal clarity.
A third is plurality, which embraces ambiguity and cultural specificity, using AI to make the richness of local language, in plant names for example, widely available. In Rwanda, participatory planning tools are being incorporated into local land-management systems to address friction rather than eliminate it. We must also return to interpretive education, strengthening the liberal arts and civic inquiry beyond mere technical proficiency. At Olin College and other experimental institutions, AI is taught alongside ethics, phenomenology, and systems theory, rather than in isolation.
Aiming for institutional clarity
Shaping the direction of AI development is not the prerogative of engineers and CEOs alone. Citizens have a role to play, not just in using machine-learning systems but in contesting the criteria by which those systems are justified. In a world of scoring systems and opaque algorithms, legitimacy must be understood as a civic concern.
Doing so means building distributed grammars for AI: participatory, open-ended, and tolerant of friction. There must be spaces where AI becomes a partner in meaning-making rather than a substitute for judgment. The civic-technology movement, from Taiwan to Barcelona to Mexico City, shows that such grammars are already emerging.
AI's next chapter will be determined by sensemaking, not scale. The challenge ahead is not to predict innovation but to shape the institutions that create meaning. This responsibility does not lie solely with engineers and executives; it belongs equally to educators, artists, legal scholars, civic designers, and anyone concerned with how power is constructed and shared. The future of AI must be written not just in labs and boardrooms but in classrooms, libraries, courthouses, and city halls. Code matters, but norms matter more. We must insist on plurality, transparency, and democratic oversight before today's defaults calcify into tomorrow's systems. It is time to stop optimizing illusions and start designing a home.
Read more articles by Jeff de Kleijn and Antoine Fourmont-Fingal.
