
Humans need to find ways to keep AI from influencing the choices and preferences we make and express, said John Wihbey, an associate professor of media innovation and technology at Northeastern University. Photo by Matthew Modoono/Northeastern University
Experts at Northeastern University say that in the not-too-distant future, most of the information people consume online will be influenced by artificial intelligence.
While the spread of AI can't realistically be slowed, it is crucial to understand AI's limitations, both what it can't do and what it shouldn't do, and to adopt ethical standards in its development and deployment, said John Wihbey, an associate professor of media innovation and technology.
Otherwise, Wihbey says, democracy is in danger.
According to him, today's democracies are complex systems in which people collectively process information and solve problems. The knowledge and information consumed by citizens play a key role in underpinning democratic life.
Chatbots can simulate human conversation and handle routine tasks effectively, while AI agents are autonomous systems that carry out tasks and resolve requests without human intervention. Wihbey said such systems could soon replace humans in information fields such as journalism, social media moderation and polling.
“As AI systems start to shape public narratives and regulate and control public knowledge, there's the potential for a kind of lock-in in terms of how we understand the world,” Wihbey said.
AI systems and large language models are trained on, and generate content from, past data about people's values and interests, and Wihbey says they will continually reinforce past beliefs and preferences, creating feedback loops and echo chambers.
This feedback-loop risk, he says, is likely to recur in one domain after another.
In journalism, Wihbey said, AI may be further integrated into newsrooms to discover and verify information, categorize content, analyze social media at scale and even automate coverage of events such as city and government meetings.
Entire municipalities or even larger regions that have become so-called “news deserts” could end up being covered entirely by AI agents, he says.
On social media, Wihbey said, AI moderators whose decisions are based on outdated data and don't align with current human preferences may over-moderate or remove user posts and comments in what has become a key arena for modern human deliberation.
If they fail to keep up with rapidly changing human contexts, AI moderators may also feed a feedback loop: their actions shape what becomes public knowledge, or what humans believe to be true and worthy of attention.
In polling, AI-driven simulations of public opinion can skew results and influence the conclusions the public draws. Such distorted knowledge can then iteratively influence human preferences and decisions in democratic settings, such as what people believe and whom they vote for, creating a recursive spiral.
By their very nature, Wihbey said, AI models will never be able to accurately predict how people will react to something, or the outcome of an election.
“Some of the research on whether AI can help simulate human polling shows that it falls short precisely where the relevant data is not yet well established in the model,” he says. “In politics and social life, a lot of what matters is fundamentally emergent.”
“Until humans, individually and collectively, encounter new areas of challenge, concern and anxiety and begin to make personal and collective decisions about them, it remains to be seen what they will think and do.”
Wihbey says these concerns could extend to online search and discovery: Google's new AI Overviews feature, for example, which synthesizes answers to search queries into a single response, could lead users to bypass the traditional process of browsing, discovering, considering and reasoning.
These limitations and imperfections in AI models mean humans must distinguish between areas where AI can usefully support collective cognition and those we want to preserve as human-centered zones for independent thought.
“At this deeper level, it's about human freedom and agency,” Wihbey says, “but I also think it's about humans being able to legitimately express new kinds of ideas and preferences that don't conform to the past.”
Humans need to find ways to ensure that AI doesn't influence the choices they make and the preferences they express, he says.
“If we're really going to respect people, we have to make sure these models are incredibly humble,” Wihbey says.
AI chatbots already mimic the authority of experts, giving answers with a fair amount of confidence, even though the answers are often incorrect, he says.
“I don't think models should pretend to be human experts in their voice, their phrasing, their composition, or the way they do things,” Wihbey said. “AI should not look, feel, or act like human intelligence.”
These systems are simply probabilistic models built from the data they've been trained on, Wihbey said.
Governments and large institutions have a role to play in protecting democratic values by helping to address the risks posed by AI, Wihbey said, but there is also the danger that governments could use AI-driven systems for their own purposes.
“Any discussion about AI, public knowledge and democracy must grapple with the vast differences in information environments around the world,” Wihbey says.
Provided by Northeastern University
This story is reprinted with permission from Northeastern Global News, news.northeastern.edu.
