This has broader implications, because modern culture is increasingly shaped by exactly this kind of pipeline. Images are converted into text; that text is converted back into images. Content is ranked, filtered, and regenerated as it moves between words, images, and videos across the web. Even when humans stay in the loop, we often choose from AI-generated options rather than starting from scratch.
The results of this recent study indicate that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable, and easy to reproduce.
Cultural stagnation or acceleration?
In recent years, skeptics have warned that generative AI could flood the web with synthetic content and lead to cultural stagnation. The argument is that, over time, this recursive loop will reduce diversity and innovation.
Defenders of the technology have pushed back, arguing that humans remain the final arbiters of creative decisions.
What is missing from this discussion is empirical evidence showing where homogenization actually begins.
The new study did not test retraining on AI-generated data. Instead, it points to something more fundamental: homogenization occurs before retraining even enters the picture. The content these systems naturally produce, when used autonomously and repeatedly, is already compressed and generic.
This is a reformulation of the stagnation argument. The risk is not only that future models may be trained on AI-generated content, but that AI-mediated culture is already filtered in a way that favors the familiar, the explainable, and the traditional.
Retraining may further amplify this effect, but it is not its source.
This is not a moral panic
Skeptics of the stagnation argument are right about one thing: cultures have always adapted to new technologies. Photography did not kill painting. Movies did not kill theater. Digital tools have made new forms of expression possible.
But those earlier technologies did not endlessly reprocess culture across mediums at a global scale. They did not summarize, reproduce, and rank cultural artifacts — news articles, songs, memes, academic papers, photos, and social media posts — millions of times a day based on the same built-in assumptions about what is "typical."
This study shows that when meanings pass through such pipelines repeatedly, diversity collapses not because of malicious intent, bad design, or corporate negligence, but because only certain kinds of meanings survive repeated transformation between text and image.
This does not mean that cultural stagnation is inevitable. Human creativity is resilient, and organizations, subcultures, and artists have always found ways to resist homogenization. In my view, however, the findings show that stagnation is not a speculative fear but a real risk if these production systems continue to operate as they are currently designed.
The findings also help clarify a common misconception about AI creativity: generating infinite variations is not the same as generating novelty. A system can produce millions of images while exploring just one corner of a cultural space.
My own thesis argued that achieving novelty requires designing AI systems with explicit incentives to deviate from the norm. Without such incentives, a system optimizes for familiarity, since that is what it has learned best. This study supports the point empirically: autonomy alone does not guarantee exploration, and in some cases it accelerates convergence.
This pattern is already showing up in the real world. One study found that AI-generated lesson plans skewed toward conventional, uninspiring content, illustrating how AI systems converge on the typical rather than the distinctive or creative.
