Getting your news from an AI chatbot is essentially injecting poison directly into your brain

A journalism professor spent a month documenting how well seven chatbots deliver the news, and the results were horrifying.

Illustration: Tag Hartman-Simkins / Futurism. Source: Getty Images

As corporate consolidation and ideological capture continue to wreak havoc on journalism around the world, one may wonder if the dire media environment will get even worse. To answer that question, just open your AI chatbot and ask for today’s news.

In an experiment fitting for 2026, Jean-Hugues Roy, a journalism professor at the University of Quebec in Montreal, decided to get his news exclusively from AI chatbots for a month. "Are they going to give me the hard facts or the 'news stories'?" he pondered in an essay about the experience published in The Conversation.

Each day throughout September, he asked seven leading AI chatbots (OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, Microsoft's Copilot, DeepSeek's DeepSeek, xAI's Grok, and Opera's Aria) the exact same prompts and recorded their responses, instructing each bot to provide at least one source for every headline (the specific URL of the article, not the home page of the outlet).

The results were dire. Roy recorded a total of 839 distinct URLs to news sources, but only 311 of them linked to actual articles. He also recorded 239 incomplete URLs in addition to 140 that were not fully functional. In 18 percent of all cases, the chatbots either hallucinated the source outright or linked to non-news sites, such as government pages or lobby groups.

Of the 311 links that actually worked, only 142 accurately matched what the chatbot claimed in its summary. The rest were only partially accurate, inaccurate, or outright plagiarized.

And that's before getting into how the chatbots handled the details of the news itself. For example, Roy writes that when an infant was found alive in June 2025 after a grueling four-day search, "Grok falsely claimed that the child's mother had left her daughter along a highway in eastern Ontario 'to go on vacation.' This was not reported anywhere."

In another example, ChatGPT claimed that an incident in northern Quebec had "reignited the debate about provincial road safety," but Roy found nothing even resembling such a debate in the cited article. "To my knowledge, no such discussion exists," he wrote.

None of that is all that surprising. AI has a terrible track record when it comes into contact with journalism, with efforts like Google's AI Overviews serving readers flagrantly distorted versions of the news and siphoning traffic away from publishers. However you cut it, it's clear that despite the tech industry's best efforts, adding AI to journalism only creates a nasty sludge that poisons everything it touches.

More on AI chatbots: USA TODAY auto racing article's AI disclaimer is longer than the actual article
