A blue shirt, a gift from my sister-in-law, gave it all away. It reminded me of Yakov Petrovich Golyadkin, the lowly bureaucrat in Fyodor Dostoyevsky's novel The Double, a perplexing portrait of a fragmented self trapped within a vast and dehumanizing feudal system.
It all started with a message from a respected colleague congratulating me on a video talk on geopolitical matters. When I clicked on the attached YouTube link and could not remember saying any of it, I began to worry that my memory was failing me. When had I recorded that video? After a few minutes, I knew something was wrong. Not because there was anything objectionable in what was said, but because the video showed me sitting at my desk in my Athens office, wearing a blue shirt, when in fact I had not left my island home. It was, after all, a video of my deepfake AI doppelgänger.
Since then, hundreds of such videos bearing my face and my voice have proliferated on YouTube and social media. Just this weekend, a new piece of deepfaked footage emerged of me saying fictitious things about the coup in Venezuela. These doubles deliver lectures, mixing things I might well say with things I would never say. They turn furious and bossy. Some are crude; others are disturbingly compelling. Supporters send me messages asking, "Giannis, did you really say that?" Opponents circulate them as evidence of my stupidity. Worse still, some people claim that my doppelgänger is more articulate and convincing than I am. And so I found myself in the strange position of being a spectator at my own digital puppet show: a specter in the techno-feudal machine that, I have long argued, is not just broken but designed to usurp power.
My first reaction was to write to Google, Meta and the rest, demanding that the videos be removed. I filled in form after form, seething with outrage, before some of the channels and videos were deleted more than a week later, only to reappear quickly in a different guise. Within a few days I gave up. No matter what we do, no matter how many hours a day we spend trying our luck petitioning big tech to take down our AI doppelgängers, more will grow back like the heads of the Hydra.
Soon, anger gave way to contemplation. After all, wasn't I the one arguing that big tech was not merely digitizing capitalism but ushering in a massive transformation, turning markets into cloud fiefdoms and profits into cloud rents? Wasn't my AI doppelgänger the perfect confirmation that, in this techno-feudal reality, the liberal individual is dead and buried?
Acknowledging a partial loss of self-ownership, I sought solace in rationalizing these deepfakes as the ultimate act of feudal enclosure. Here was proof that under techno-feudalism we own nothing: not the data output of our labor, not our social graph, and now not even our audiovisual identity. Our new lords treat us as tenants in their clouds, free to deploy our likenesses at will to sow confusion, muddy debate, and drown genuine dissenting voices in a cacophony of synthetic noise manufactured for the purpose.
But then a brighter thought came to mind, one that took me back to ancient Athens. What if my AI doppelgänger is a harbinger of what comes next: isegoria (ἰσηγορία), a principle as glorious and promising, and yet as absent, as true democracy itself? When I asked several AI chatbots to define it, they all dutifully misrepresented its meaning, offering things like "equality of speech", "the right to be heard", or "the freedom to address the assembly". But that is not what the Athenians meant by the word. In fact, for them isegoria meant nearly the opposite of what we mean by "free speech" today, which they would have dismissed as an abstract right to shout into the air. For the Athenians, isegoria meant the right to have your opinion seriously weighed on its merits, regardless of who you were or how well you could actually express it.
Could AI-powered deepfakes salvage isegoria? Could they help us escape the clutches of techno-feudal dystopia? If we find it impossible to know who is speaking in a YouTube video, won't we be forced to judge the quality of what is said rather than who is saying it? In the very process of discrediting the speaker, might big tech inadvertently give isegoria a chance? These questions brought a glimmer of hope.
It was the hope that the specter of democracy might still hover over us, if only we could look up and find the motivation to engage in the slow, difficult democratic labor that algorithmic feeds were designed to obliterate: the critical evaluation of the views and arguments thrown at us. Sadly, this hope, however concrete, is insufficient as long as our techno-feudal lords retain two huge asymmetric advantages.
First, they own the agora itself: the servers, the feeds, the algorithmic means of communication. They can certify their own speech with digital seals of authenticity while drowning ours in a quagmire of doubt and noise. The result? Not isegoria, but truth as a digital divine right, the patented property of power.
Second, and more insidiously, they don't need deepfakes to dominate. Their ideology is the machine itself: the power to extract surplus value from a proletariat tethered to the cloud through myriad digital devices, the logic of extracting cloud rents from the vassal capitalists on their platforms, the tyranny of shareholder value, the march of their impending privatization of money.
Our task, therefore, is not to beg these lords for validation. Our task is political: to socialize cloud capital, the all-powerful new force that is transforming society and destroying everything that makes humanism possible.
Until then, let our digital doppelgängers do the talking. Perhaps they will so saturate the scene that we will finally stop listening to the voice and start judging the argument, at our own leisure. That is perhaps the most paradoxical sliver of hope in this hall of mirrors. But in this carnival, we grasp whatever hope we can.
