AI is not neutral: it is rooted in power, capitalism, and the reduction of human life to calculable data.

A new philosophical study by Niklas Rautenberg of the University of Hamburg raises urgent questions about the trajectory of artificial intelligence (AI), arguing that today’s AI boom is not a sudden technological rupture but the latest phase in a centuries-old effort to transform the world and human life into measurable, controllable systems. The paper explores how modern AI extends a deeper intellectual tradition analyzed by Edmund Husserl, Martin Heidegger, and Herbert Marcuse, and offers a new critique of how machines are reshaping not only the economy but also human self-understanding.

Published in AI & Society, the study, titled “Artificial Intelligence, Computational Reason, and Technological Domination: Lessons from Husserl, Heidegger, and Marcuse,” argues that modern machine learning and generative AI systems are part of a long historical project aimed at creating a “computationally readable” representation of reality, a project with far-reaching political and social implications.

AI is the culmination of centuries of effort to mathematize reality

The study traces the philosophical roots of AI to the early development of Western science, particularly the transition from everyday practical knowledge to abstract mathematical reasoning. Drawing on Husserl’s analysis, the author argues that modern science set in motion a process of “mathematization” that changed the way we understand reality.

In this framework, the world is no longer experienced primarily through living, sensory engagement, but through abstract models and calculations. Over time, this change led to the belief that all aspects of reality, including human thoughts and actions, can be quantified and predicted.

Research shows that modern AI represents the most advanced stage of this process. Machine learning systems work by identifying statistical patterns within large datasets, effectively translating complex human activities into mathematical relationships. Generative AI goes further by generating text, images, and music, extending this logic into creative realms once thought to be uniquely human.

The paper argues that this transformation carries hidden costs. As mathematical models become more authoritative, everyday knowledge and lived human experience are devalued. What cannot be measured or calculated is often dismissed as subjective or unreliable.

This shift has broader societal implications. The research highlights that AI systems are often perceived as objective and unbiased, reinforcing trust in algorithmic decision-making across areas such as recruitment, healthcare, and governance. That perception, however, obscures the underlying assumptions and limitations of these systems.

The author warns that increasing reliance on AI risks replacing nuanced human judgment with simplified computational logic, narrowing the ways we interpret reality and act within it.

From calculation to control: AI and the rise of a control-oriented worldview

Drawing on Heidegger’s philosophy, the study moves beyond mathematization to investigate the deeper logic driving AI development. The author argues that the drive to quantify the world is rooted in a deeper desire for mastery and control.

Heidegger’s concept of “enframing” (Gestell) is central to this analysis. In this view, modern technology encourages people to see the world not as a complex living environment, but as a set of resources to be managed and exploited. Everything, including humans, becomes part of the available “standing reserve.”

The study argues that AI embodies this idea in a particularly powerful way. Unlike earlier technologies, AI systems actively shape the way people understand themselves and others. Through data collection and analysis, these systems transform human behavior into measurable inputs, reinforcing the idea that individuals can be optimized like machines.

Examples cited in the paper include wearable health devices that track bodily functions, AI-generated content that turns creativity into reproducible data, and chatbot interactions that redefine relationships in terms of efficiency and availability. These developments reflect broader changes in the way human lives are valued, the study argues.

Under this logic, traits such as emotional depth, unpredictability, and individuality are increasingly seen as inefficiencies rather than strengths. The result is a kind of self-alienation in which people begin to see themselves through the same computational lens applied by machines.

The study also highlights the risk of this mindset becoming dominant. As AI systems become more integrated into daily life, patterns of thinking that prioritize control, prediction, and optimization will be reinforced. Over time, this can limit alternative ways of understanding the world and make it difficult to question or resist the underlying system.

As the author warns, this trend can lead to a significant loss of agency, as individuals come to accept algorithmic logic as the default framework for decision-making.

Capitalism, AI, and the entrenchment of technological domination

Finally, drawing on Marcuse’s critical theory, the study situates AI within the economic structure of modern capitalism. The author argues that the development and deployment of AI cannot be understood separately from the systems that produce and profit from it.

According to the paper, capitalism provides the conditions for AI’s computational logic to become socially dominant. The integration of scientific methods into industrial production, combined with the pursuit of efficiency and profit, creates a feedback loop in which technological innovations strengthen existing power structures.

AI plays a key role in this process by increasing productivity, optimizing the workforce, and enabling new forms of monitoring and control. The study cites the rise of algorithmic management, targeted advertising, and data-driven decision-making as examples of how AI is being integrated into economic systems.

These technologies not only improve efficiency but also shape human behavior. By influencing consumption patterns, labor practices, and social interactions, they create what Marcuse described as a “one-dimensional” society, where alternative ways of thinking and living are marginalized.

The paper argues that this dynamic extends to the development of AI itself. The industry is dominated by a relatively small group of companies and individuals whose priorities and values are reflected in the systems they build. This concentration of power raises concerns about whose interests are being served and whose voices are being excluded.

The author also points to the role of AI in reinforcing existing inequalities. Training data often reflects dominant cultural perspectives, while marginalized communities are underrepresented or misrepresented. This can lead to biased outcomes and further entrench social disparities.

The benefits of AI are unevenly distributed. Some groups benefit from increased productivity and convenience, while others face displacement, less autonomy, and increased scrutiny. The result is a complex system in which technological progress is closely linked to economic and political power, making it difficult to separate innovation from its broader effects.

A call for critical engagement rather than technical rejection

Notably, the study does not recommend abandoning AI. Instead, it calls for a more thoughtful and politically aware approach to technology. The author argues that understanding AI’s historical and philosophical roots is essential to addressing its current challenges. Recognizing the assumptions driving its development allows policymakers, researchers, and users to better assess its impact and explore alternative paths.

Collective action, including critical research, organized resistance to unchecked technology adoption, and collaboration with affected communities, is essential. Meaningful change, the study suggests, requires not just technological fixes but broader social and political transformation.
