AI is changing more than your writing and may be shaping your worldview

The use of ChatGPT, Claude, and other large language models (LLMs), the tools most people simply call “AI,” has skyrocketed since ChatGPT launched in 2022. According to recent estimates, hundreds of millions of people now use these tools every week.

Users may think these tools simply help organize their thoughts, but recent research suggests they may be doing something more subtle and powerful: influencing the way we think, speak, and even understand the world.

In a recent opinion piece, researchers at the USC Dornsife College of Letters, Arts and Sciences examined how artificial intelligence systems like ChatGPT can steer people toward similar ways of communicating and reasoning, a process they call “cultural homogenization.”

“AI is no longer just a reflection of culture,” said lead author Yalda Daryani, a doctoral student in social psychology at USC Dornsife. “It’s actively shaping it. It determines what sounds polite, what sounds clear, and even what counts as a good answer.”

So the researchers set out to understand how large language models like ChatGPT, Anthropic’s Claude, and Google’s Gemini influence human culture on a global scale, and how policy can address the wide-ranging effects these LLMs may have.

Patterns emerge with the use of AI

The researchers, working under the guidance of Morteza Dehghani, professor of psychology and computer science and director of the Moral Language Laboratory at USC Dornsife, reviewed a wide range of recent research across psychology, computer science, and linguistics to understand how LLMs perform across different cultures and how people respond when they use AI for real-world tasks such as writing and decision-making.

They found a consistent pattern: AI systems tend to reflect and amplify a narrow slice of human experience.

The study’s central finding is that these systems often align with what the researchers call the “WHELM” perspective: Western, high-income, well-educated, liberal, and male. In other words, they reflect the values and communication styles most common in English-language online data.

“If you ask AI for advice, you won’t get a neutral answer,” Daryani said. “Even if it’s not explicitly written, you’re getting the perspective of a very specific group of people.”

This pattern shows up in how AI handles moral questions. The study found that AI systems tend to favor values such as individual freedom and fairness, while placing less weight on tradition, authority, and community, ideas that are more central to many non-Western cultures.

AI’s impact extends to subtle social interactions

AI’s influence goes beyond values; it also affects how people communicate.

“When millions of people use AI to craft their messages, those individual differences start to disappear,” Daryani said. “Over time, we may all end up sounding very similar.”

Even when users ask questions in other languages, the models often return examples tied to American or European culture, such as American holidays or English-language movies, while offering less detailed or more stereotypical descriptions of non-Western traditions.

Dehghani said this pattern creates a kind of feedback loop. “The more we rely on these systems, the more their output becomes part of our shared knowledge, and that same material is used to train the next generation of AI, so the cycle gets stronger.”

Over time, the researchers warn, that loop can gradually narrow the range of ideas, traditions, and communication styles that people are exposed to and able to express.

Why does this matter? Because cultural diversity is about more than language and customs, the researchers say. It shapes how people think, solve problems, and make decisions. A broader range of perspectives leads to better solutions and more creative ideas. If that diversity shrinks, the researchers argue, societies could lose important ways of understanding the world.

How to build better AI

Notably, the research team does not suggest that AI is inherently harmful. LLMs make writing easier, improve access to information, and help people communicate more clearly. The concern, the researchers say, is what happens when a small number of systems start influencing billions of interactions every day.

“Once a system is trained on a narrow dataset, it’s very difficult to undo,” Daryani said.

To address this issue, the team builds on its findings to outline a three-part approach, starting with the data used to train the models. Most AI systems learn from English-language content that draws heavily on Western sources. The researchers say developers should include more material from different languages, regions, and cultural traditions to capture cultural knowledge that may be systematically underrepresented.

The researchers also suggest that the later training stages used to refine and evaluate LLMs incorporate culturally diverse examples, and that developers consult experts who work with diverse cultural communities, such as psychologists, anthropologists, linguists, and policymakers, to ensure that responses reflect different social norms and values.

Finally, the team recommends changing how training results are evaluated. Currently, tech companies employ workers from many countries but train them to apply standardized Western evaluation criteria. Instead, the researchers argue, reviewers should judge answers against multiple culturally informed criteria.

Taken together, these changes could help AI systems recognize that there is no single “right” way to communicate and reason, and preserve a broader range of human perspectives as the technology continues to evolve.

For Daryani, the stakes are clear. “A language, a tradition, a way of thinking. Once it’s gone, you can’t get it back. The question is not whether it’s hard to fix this. The question is whether we can afford not to fix it.”

About the research

Jival Sourati, a doctoral student in the USC Viterbi School of Engineering, is a co-author of the paper, which was published in the journal Policy Insights from the Behavioral and Brain Sciences.


