A story of man vs. machine
In the classic 1968 film 2001: A Space Odyssey, a spacecraft bound for Jupiter is equipped with an onboard computer named HAL 9000. Over time, this computer becomes a deadly adversary of the astronauts on the ship. After an apparent computer error, several crew members plan to switch HAL off. In the name of protecting the spaceship’s secret mission, HAL kills the crew. Eventually, however, astronaut Dave Bowman succeeds in shutting HAL down, ignoring the computer’s desperate pleas to stop.
In The Terminator (1984), the conflict between humans and self-preserving AI escalates further: it becomes a matter of survival for humanity as a whole.
Many of the same tropes appear in films such as The Matrix (1999), I, Robot (2004), Transcendence (2014), Ex Machina (2015), M3GAN (2022), and The Creator (2023). These films reflect a particular fear of AI, one further amplified by prominent figures in this century’s technology industry: the fear that we are headed toward a conflict between humans and machines. Elon Musk claimed at the Bletchley Park AI Summit that AI is “one of the biggest threats to humanity” and that for the first time we are facing “something that is going to be much more intelligent than we are.” OpenAI’s Sam Altman has argued that generative AI could bring about the end of human civilization, and that AI poses the same risk of extinction as nuclear war or a global pandemic.
Even in academia, this story has found some resonance. Philosopher Nick Bostrom has written extensively about the existential risks that AI poses to humanity and about the potential for an intelligence explosion, in which AI keeps improving itself after reaching human-level capability. Computer scientist Stuart Russell, along with his collaborators at the University of California, Berkeley’s Center for Human-Compatible Artificial Intelligence, works on so-called alignment problems: the challenge of matching machine objectives with human objectives.
Another dystopian narrative, almost as frightening, is that AI won’t kill us, but it will make human workers obsolete, inevitably causing mass unemployment and social unrest. For example, a 2023 Goldman Sachs report claims that generative AI could replace the equivalent of 300 million full-time jobs in Europe and the United States.
Stories told in Hollywood and Silicon Valley tend to feature heroic conflicts between humans (usually men) and machines: Dave Bowman and HAL 9000 in 2001: A Space Odyssey, Kyle Reese and the Terminator in The Terminator, Nathan and Ava in Ex Machina, or Sam Altman and the extinction of humanity by AI. The academic version of the story, told by computer scientists, likewise pits humans against machines, framing the problem as one of machine value alignment, that is, of misspecified or biased machine goals.
What is missing from the old story?
One of the key things the human-versus-machine narrative misses is that technology is not destiny. Just as people create technology, they determine how it is used and what purposes it serves. As AI is developed and deployed, these decisions are made over and over again. Moreover, AI is ultimately not that complicated: anyone can understand how it works. The real conflict is not between humans and machines but between different members of society. And the answer to AI’s various risks and harms is to control its goals publicly, through democratic means.
At the heart of AI is automated decision-making through optimization. In other words, AI algorithms are designed to make a measurable objective as large as possible. Such an algorithm might, for example, maximize the number of times someone clicks on an ad. In AI, then, someone chooses an objective, a so-called reward, and the algorithm optimizes it. Someone literally has to type into a computer: “This is the measure of reward we care about.”
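To make this concrete, here is a minimal, hypothetical sketch in Python. The ad names and click rates are invented for illustration; the point is only that a human designer specifies the reward, and the optimization step simply picks whatever action makes that reward largest.

    # Hypothetical example: the designer decides that reward = expected ad clicks.
    # The click-through rates below are made-up numbers; in a real system
    # they would be estimated from user data.
    estimated_click_rate = {"ad_a": 0.021, "ad_b": 0.034, "ad_c": 0.015}

    def reward(ad):
        # The human-chosen objective: expected clicks per impression.
        return estimated_click_rate[ad]

    # Optimization: choose the action that makes the reward as large as possible.
    best_ad = max(estimated_click_rate, key=reward)
    print(best_ad)  # prints "ad_b"

Everything consequential here happens in the reward function: swap in a different objective, say time spent on the platform, and the very same optimization machinery serves a different interest.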
The important question, then, is who chooses the goals of an AI system. We live in a capitalist society, where the purpose of AI is typically determined by the owners of capital. The owners of capital control the means of prediction: the data, computational infrastructure, technical expertise, and energy required to build AI. More generally, the purpose of AI is determined by those with societal power, whether in the criminal justice system, education, health care, or the secret police of an authoritarian surveillance state.
One of the arenas where AI is being introduced into society is the workplace. AI is used in Amazon’s robotized warehouses, in the algorithmic management of Uber drivers, and in the screening of job applicants by large corporations. AI is also used in consequential areas outside the workplace, such as filtering and ranking Facebook feeds and Google search results with the aim of maximizing ad clicks. A third area is predictive policing and the pretrial detention of defendants based on predictions of crimes not yet committed. Perhaps most devastatingly, AI is also being introduced into warfare; for example, it was used to decide which households to bomb in the Gaza Strip starting in 2023.
Of course, many researchers and critics have warned of the dangers of using AI in these consequential areas. Joy Buolamwini, a computer scientist at the MIT Media Lab, has written extensively about the dangers of inaccurate and racially biased facial recognition systems. Ruha Benjamin, a sociologist at Princeton University, has emphasized that AI can reproduce and reinforce existing social inequalities in areas such as education, employment, criminal justice, and health care. Similarly, computer scientist Timnit Gebru, writing during her time at Google, warned of the dangers of large language models acting as stochastic parrots: language models repeat linguistic patterns without understanding them, thereby reproducing biases embedded in their training data. Meredith Whittaker, the current president of the Signal Foundation, criticizes the political economy of the tech industry, in which AI is used by powerful actors to perpetuate marginalization. Kate Crawford, a professor at the University of Southern California and co-founder of the AI Now Institute, emphasizes the nature of AI as an extractive and exploitative industry.
Amidst these overlapping critiques, each focusing on different aspects and pitfalls of AI, it is difficult to formulate a systematic way of thinking about AI in society. One possible unifying perspective is provided by computer science. Computer scientists are trained to think of most problems as optimization problems. In this context, optimization involves finding decisions that make a given reward as large as possible given limited computational resources and limited data.
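In symbols, this framing can be sketched as follows (the notation here is mine, offered as a simplified gloss rather than the author’s own formalism):

    maximize over d in D:   Ê[ R(d) ]

where D is the set of feasible decisions given the available computational resources, R is the reward that someone has specified, and Ê[·] denotes an estimate of the expected reward formed from limited data. Everything in this expression except the maximization itself is supplied by people: the choice of R, the data behind Ê, and the resources that define D.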
A computer science perspective has informed much of the public debate about AI safety and AI ethics, especially on topics like fairness and value alignment: “If something went wrong, there must have been an optimization error.” In this view, the problem is simply that an action was chosen that does not maximize the specified objective. But this perspective mostly fails to get to the heart of the matter, because it says nothing about the choice of objectives themselves.
I argue that the central issue is not optimization errors but conflicts of interest over control of AI’s goals. When AI harms people, the problem is usually not that the algorithm is insufficiently optimized. The problem is that the algorithmically optimized goals are good for those who control the means of prediction, such as Amazon founder and former CEO Jeff Bezos and Meta founder and CEO Mark Zuckerberg, but bad for the rest of society.
This understanding changes how we think about possible solutions to AI problems. How do we address AI ethics and AI safety when the fundamental problem lies with who sets AI’s goals? How do we choose those goals so that they serve the general public rather than just a powerful minority? The answer, I argue, is democratic control. Democratic control is not limited to democratically elected national governments; collective democratic decision-making can exist at different levels, including the workplace, nation-states, and the global level.
The challenge, of course, is that democracy is difficult. Democratic control of new technologies like AI requires public deliberation, but given the view held by many (and reinforced by the tech industry) that AI is extremely complex, such public deliberation may seem impossible.
But despite all the jargon, and despite the difficulty of keeping up with the latest innovations, the basic ideas of AI are neither that complicated nor that new. We can all understand them. No matter who you are, don’t let anyone tell you that you aren’t the “type” to understand AI.
________________________________

Excerpted from The Means of Prediction: How AI Really Works (and Who Benefits) by Maximilian Kasy. Copyright © 2025. Available from the University of Chicago Press.
