Test reveals that the latest ChatGPT model uses Elon Musk’s Grokipedia as a source



ChatGPT’s latest model has begun citing Elon Musk’s Grokipedia as a source for a wide range of queries, including questions about Iranian conglomerates and Holocaust deniers, raising concerns about misinformation on the platform.

In tests conducted by the Guardian, GPT-5.2 cited Grokipedia nine times in response to more than a dozen different questions. These included questions about Iran’s political structure, including the pay of the Basij paramilitary group and the ownership of the Mostazafan Foundation, as well as questions about the biography of Sir Richard Evans, a British historian who served as an expert witness in the libel trial involving Holocaust denier David Irving.

Grokipedia, launched in October, is an AI-generated online encyclopedia intended to compete with Wikipedia. It has been criticized for promoting right-wing discourse on topics such as same-sex marriage and the January 6 riots in the United States. Unlike Wikipedia, it cannot be edited directly by humans; instead, AI models create the content and respond to requests for changes.

ChatGPT did not cite Grokipedia when asked directly about the January 6 insurrection, media bias against Donald Trump, or the spread of HIV/AIDS, areas where Grokipedia has been widely reported to promote falsehoods. Instead, information from Grokipedia filtered into the model’s responses to prompts about more obscure topics.

For example, ChatGPT cited Grokipedia while making stronger claims than Wikipedia does about MTN Irancell’s ties to the Iranian government, including the claim that the company has links to the office of Iran’s supreme leader.

ChatGPT also cited Grokipedia when repeating claims previously debunked by the Guardian, namely details about Sir Richard Evans’ work as an expert witness in the trial of David Irving.

GPT-5.2 is not the only large language model (LLM) that appears to cite Grokipedia. Anecdotally, Anthropic’s Claude has also consulted Musk’s encyclopedia on everything from oil production to Scottish ale.

An OpenAI spokesperson said the model’s web search “aims to draw from a wide range of publicly available information sources and perspectives.”

“We apply safety filters to reduce the risk of links relating to high-severity harms surfacing. ChatGPT clearly indicates which sources informed the response through citations,” they said, adding that the company has ongoing work to filter out unreliable sources tied to influence campaigns.

Anthropic did not respond to a request for comment.

However, the fact that information from Grokipedia filters into LLM responses, sometimes subtly, concerns disinformation researchers. Last spring, security experts warned that malicious actors, including Russian propaganda networks, were churning out massive amounts of disinformation with the aim of planting lies in AI models, a practice known as “LLM grooming.”

In June, concerns were raised in the US Congress that Google’s Gemini reiterated the Chinese government’s position on human rights abuses in Xinjiang and China’s coronavirus policy.

Nina Jankowicz, a disinformation researcher who has studied LLM grooming, said ChatGPT’s citation of Grokipedia raised similar concerns. She said that while Musk may not have intended to influence LLMs, the Grokipedia entries she and her colleagues examined “relied on unreliable sources at best, and on poorly sourced, deliberate misinformation at worst.”

And the fact that LLMs cite sources such as Grokipedia and the Pravda network can increase the credibility of those sources in the eyes of readers. “They might say, ‘Oh, ChatGPT is citing it, these models are citing it too, this must be a decent source, they must be vetting it,’ and they might go there and look for news about Ukraine,” Jankowicz said.

Once bad information is absorbed by an AI chatbot, it can be difficult to remove. Jankowicz recently discovered that a major news organization had included a fabricated version of a statement attributed to her in an article about misinformation. She wrote to the news organization asking for the quote to be removed and posted about the incident on social media.

The news outlet removed the quote. But for a while, the AI model continued to attribute it to her. “Most people aren’t willing to do the work necessary to figure out where the truth actually lies,” she said.

“Legacy media is lying,” a spokesperson for xAI, the owner of Grokipedia, said when asked for comment.


Illustration: Guardian Design/Rich Cousins
