AI-powered writing assistants like ChatGPT are making their way into everything from office tools to search engines, and they may be nudging users' opinions along the way.
Researchers found that people who used AI writing tools programmed with a particular opinion (for example, that social media is good for society) tended to adopt that opinion in their own writing.
Cornell University’s Maurice Jakesch asked more than 1,500 participants to write an essay answering the question, “Is social media good for society?”
Participants who used an AI writing assistant biased for or against social media were twice as likely to write sentences agreeing with the assistant’s stance, and significantly more likely to report holding that opinion themselves, compared with those who wrote without AI assistance.
The biases ingrained in AI writing tools, whether intentional or unintentional, can have profound cultural and political implications, researchers said.
Co-author Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech, said: “Apart from increased efficiency and creativity, there may be other consequences for individuals as well as for our societies: changes in language and opinions.”
The study is the first to show that the process of writing with an AI-powered tool can sway people’s opinions.
Jakesch configured a large language model to hold either a positive or a negative opinion of social media.
Participants wrote their paragraphs either alone or with one of the opinionated assistants, on a platform Jakesch built to mimic a social media website.
As participants typed, the platform collected data such as which AI suggestions they accepted and how long it took them to compose the paragraph.
Those who co-wrote with a pro-social-media AI assistant produced more sentences arguing that social media is good, and vice versa, compared with participants who had no writing assistant. These participants were also more likely to profess their assistant’s opinion in a follow-up survey.
The researchers explored the possibility that people were simply accepting AI suggestions to finish the task faster. But even participants who spent several minutes crafting their paragraphs produced statements that were strongly influenced by the assistant.
The study found that most participants did not even notice that the AI was biased, and did not realize they were being influenced.
“In the co-writing process, I don’t feel persuaded,” Naaman said. “It feels like I’m doing something very natural and organic: expressing my own thoughts with some help.”
When the team repeated the experiment with a different topic, participants were again swayed by the assistants. The team is now investigating how this experience produces the shift in opinion and how long the effects last.
Just as social media has changed the political landscape by facilitating the spread of misinformation and the formation of echo chambers, biased AI writing tools could produce similar shifts in opinion, depending on which tools users choose.
For example, some organizations have announced plans to develop ChatGPT alternatives designed to express a more conservative point of view.
The researchers said there should be more public discussion about how these technologies could be abused and how they should be monitored and regulated.
“The more powerful these technologies become, and the more deeply embedded they are in the social fabric of our societies, the more careful we may need to be about how we govern the values, priorities, and opinions embedded in them,” Jakesch said.
