Study finds that AI is changing the style and content of human writing

Does money lead to happiness?

Researchers at the West Coast Universities Association were interested in how 100 human participants would respond to this age-old question, but not in pursuit of their own happiness. Instead, the researchers wanted to know how participants’ use of AI systems affected their written responses.

The researchers found that users who relied heavily on large language models (LLMs) produced answers that differed significantly in meaning from those of participants who used LLMs only sparingly or avoided them altogether, suggesting that heavy use of AI is changing not just writing style but the substance of human discussion.

“LLMs move essays away from anything ever written by humans,” said Natasha Jacks, one of the study’s lead authors and a computer science professor at the University of Washington, highlighting the “branding” of writing that relies on AI systems. “They just change human writing on a very large scale, and it’s completely different from what humans would do otherwise.”

The new study, which has been peer-reviewed and accepted for an upcoming workshop at a major AI conference, found that people who relied heavily on LLMs were 69% more likely to write essays giving neutral answers to the happiness question than participants who did not use AI or who used it only for light editing. Participants who used AI less frequently or avoided it altogether submitted far more passionate essays about the relationship between money and happiness, whether positive or negative.

In addition to AI’s impact on the meaning of essays, researchers also found that greater reliance on AI systems changed the overall style of users’ output, making their language less personal and more formal.

After the experiment, participants who relied heavily on the AI reported that their essays felt significantly less creative and expressed less of their own opinions. At the same time, these participants reported similar levels of satisfaction with the final product compared to participants who used less AI, raising concerns among the authors and outside experts about the long-term effects of humanity’s increasing use of AI systems.

“This study highlights that LLMs cannot follow people’s preferences or personalize how humans write essays,” said Jacks, who is also a senior researcher at Google DeepMind, one of the world’s leading AI companies. “The ideal LLM writes the essays you would write, saving you time.”

“I’m not doing that at all. I’m writing a completely different essay.”

The study evaluated the impact of three major AI systems widely used in 2025: Anthropic’s Claude 3.5 Haiku, OpenAI’s GPT-5 Mini, and Google’s Gemini 2.5 Flash. In an initial analysis, the researchers found that half of the participants either declined to use the LLMs at all or used them only to find information rather than to generate new content. To better categorize the large number of participants, the researchers defined heavy AI users as those who reported using an LLM to generate at least 40% of the text they wrote for the experiment.

The authors found that users who relied heavily on LLMs submitted essays with 50% fewer pronouns, a major shift toward impersonal language with fewer anecdotes and references to human experience.

In addition to the experiment on money and happiness, the new paper analyzes how LLMs edit essays differently from human editors and investigates how the use of AI affects the criteria scientists employ to decide whether a paper should be accepted at a major AI conference.

To compare how LLMs edit existing texts with how humans do, Jacks and her colleagues used a database of human-written essays from 2021, allowing them to evaluate texts published before LLMs were widely adopted.

By asking LLMs to revise essays based on feedback included in the original human-authored dataset, the study authors found that all three major AI systems made far more extensive edits than human editors in the same situation, and that the AI edits also changed the meaning of the underlying essays.

While human editors often made changes that swapped individual words and left much of the original vocabulary intact, an LLM “replaces far more of the original text than when humans revise their own work,” according to the paper.

“This word displacement leads to a loss of personal voice, style, and meaning, as each writer’s unique lexical fingerprint is overwritten by a particular model’s preferred vocabulary,” the authors write.

Thomas Juzek, a professor of computational linguistics at Florida State University who was not involved in the study, said the paper is a valuable contribution to a rapidly growing field of interest.

“This is a really good paper,” Juzek told NBC News. “What really struck me was this kind of illusion that you’re performing a grammar check using an LLM. This study shows that users may think they’re just performing a simple language check, but the model is doing more than that.”

“What does this mean for thinking, language, communication and creativity in the future?” asked Juzek.

Jacks argued that the AI systems’ language-changing behavior could be a result of current training methods, which reward the kinds of edits human graders prefer.

“If you’re training a model based on human feedback, the model has no boundaries or ability to know the difference between satisfying the human and actually changing the human to make it easier to meet the human’s preferences,” Jacks said. She suggested that humans’ reliance on LLMs for writing might be similar to the way YouTube’s recommendations change people’s preferences about which kinds of videos they enjoy most.

Looking forward, Jacks said she hopes to see more research into the long-term effects of AI systems on human values, expression, and institutions, especially as more researchers rely on AI in their own work.

“Humans value clarity, relevance, and impact; AI values scalability and reproducibility,” Jacks told NBC News. “It’s changing our conclusions, and it’s already impacting our existing institutions.”

For her part, Jacks said she avoids using AI to write new papers. Instead, she uses LLMs, and their shortcomings, as inspiration for her own writing.

“Sometimes I put into the LLM a crappy version of what I’m trying to say conversationally,” Jacks said. “It usually produces something that motivates me to write it myself.”
