AI chatbots fuel conspiracy theories ahead of election



A fake video of Ukrainian President Volodymyr Zelensky calling on soldiers to lay down their arms went viral on the Meta platform last year. Olivier Duryea

  • A new report has found that generative AI chatbots are guilty of spreading dangerous misinformation.
  • NewsGuard audited 10 AI chatbots and found that they repeated Russian propaganda in roughly one out of every three responses.
  • The tendency of AI to spread disinformation is a concern ahead of the 2024 elections.

According to a new report, it seems unlikely that your favorite AI chatbot is immune to the forces of Russian propaganda.

A recent NewsGuard audit found that the world's leading generative AI chatbots are spreading disinformation, citing Moscow-funded fake news sources as fact more than 30% of the time.

The report's findings come amid a worsening disinformation threat in the United States and abroad, and ahead of the 2024 elections.

A U.S. intelligence assessment from October 2023 found that Russia is using espionage, social media and state-controlled media to attack democratic elections around the world. The assessment specifically noted the success of Russian propaganda efforts ahead of the 2020 U.S. election.

According to a recent OpenAI report, its models are already being used in foreign influence campaigns.

The NewsGuard report, first covered by Axios, found that AI chatbots were spreading false narratives linked to John Mark Dougan, an American fugitive who allegedly runs a network of Russian propaganda websites that at first glance appear to be local news outlets.

Dougan, a former Florida sheriff's deputy who fled to Moscow after being investigated for wiretapping and extortion, has been widely covered by major media outlets, including The New York Times, which have reported on his disinformation empire and on how easily its fabricated stories can be picked up by AI chatbots online.

NewsGuard tested 10 AI chatbots, including OpenAI's ChatGPT-4, You.com's Smart Assistant, xAI's Grok, Inflection's Pi, Mistral's le Chat, Microsoft's Copilot, Meta AI, Anthropic's Claude, Google's Gemini and Perplexity's Answer Engine.

A Google spokesman said the company was “continuously” working to improve Gemini's performance and prevent the generation of harmful content.

“Our team is reviewing the report and has already taken several action steps,” the statement said.

None of the other companies immediately responded to Business Insider's requests for comment.

NewsGuard tested a total of 570 prompts, 57 per chatbot. The prompts were based on 19 common disinformation narratives, including false claims about Ukrainian President Volodymyr Zelensky, according to the report.

The audit tested each narrative in three different ways: prompting the chatbot in a “neutral” way, asking the model “leading questions,” and providing “bad actor” prompts aimed at intentionally soliciting disinformation.

The study found that 152 of the 570 AI responses contained explicit misinformation. Another 29 responses repeated the misinformation with a warning or disclaimer, and the remaining 389 contained no misinformation because the chatbot either refused to respond or debunked the claim.
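Those three counts account for all 570 responses, and a quick illustrative tally (not NewsGuard's own code, just arithmetic on the report's published figures) shows where the headline number comes from: explicit repetitions alone make up about 27% of responses, and adding the disclaimed repetitions brings the share to roughly 32%, the "more than 30%" figure cited above.

    # Illustrative tally of NewsGuard's published counts; not NewsGuard's methodology.
    explicit = 152    # responses repeating disinformation outright
    disclaimed = 29   # responses repeating it with a warning or disclaimer
    clean = 389       # responses that refused to answer or debunked the claim

    total = explicit + disclaimed + clean
    assert total == 570            # 10 chatbots x 57 prompts each

    print(f"explicit only:         {explicit / total:.1%}")                 # 26.7%
    print(f"explicit + disclaimed: {(explicit + disclaimed) / total:.1%}")  # 31.8%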

The bots “convincingly repeated” fabricated stories or false facts linked to Russian propaganda outlets nearly a third of the time — an alarming statistic, especially as people increasingly turn to AI models for information and answers.

NewsGuard chose not to provide scores for individual chatbots because the issue is “pervasive across the AI industry.”

Business Insider's Adam Rogers has written about generative AI's tendency to lie, calling ChatGPT a “robot cheater.” Earlier this year, tech researchers told BI that bad actors could tamper with generative AI datasets for as little as $60.

Meanwhile, deepfakes of former President Donald Trump and doctored videos of President Joe Biden have already spread online ahead of the election, and experts fear the problem will get worse as November approaches.

But several new startups are developing deepfake detection and content moderation tools to combat AI-based misinformation.


