Elon Musk's social media platform X's decision to enlist artificial intelligence chatbots to draft fact checks risks increasing the promotion of "lies and conspiracy theories", a former UK technology minister has warned.
Damian Collins accused Musk's firm of "leaving it to bots to edit the news" after X announced on Tuesday that it would allow AI to write community notes clarifying or correcting contentious posts, before users approve them for publication. Until now, the notes have been written by humans.
X said using AI to write fact-checking notes, which appear beneath some X posts, "advances the state of the art in improving information quality on the internet".
Keith Coleman, vice-president of product at X, said humans would review AI-generated notes, and that notes would be shown only if people with different viewpoints found them useful.
"We designed the pilot to be AI helping humans, with humans deciding," he said. "We believe this can deliver both high quality and high trust. We also published a paper alongside the launch of the pilot, co-authored with professors and researchers from MIT, the University of Washington, Harvard and Stanford, laying out why this combination of AI and humans is such a promising direction."
However, Collins said the system was already open to abuse, and that AI agents working on community notes could enable "the industrial manipulation of what people see and trust" on a platform with about 600 million users.
It is the latest retreat from human fact checkers by US technology companies. Last month, Google said user-created fact checks, including those from professional fact-checking organisations, would no longer be prioritised in its search results, saying such checks "no longer provide significant additional value for users". In January, Meta announced it was getting rid of human fact checkers in the US and adopting its own community notes system on Instagram, Facebook and Threads.
An X research paper outlining the new fact-checking system says professional fact checks are frequently slow and limited in scale, and that they "lack the trust of broad sections of the public".
AI-created community notes, it says, can be produced quickly, with little effort and with "high quality potential". Notes written by humans and by AI will be submitted to the same pool, and X users will vote on which are most helpful and should appear on the platform.
AI will draft "a neutral summary of the evidence", the research paper says, and trust in community notes "stems not from who drafts the notes, but from the people who rate them".
But Andy Dudfield, head of AI at the UK fact-checking organisation Full Fact, said: "These plans risk adding to the already significant burden on human reviewers to check even more notes, opening the door to a worrying and plausible situation whereby notes could be drafted, reviewed and published entirely by AI without the careful consideration that human input provides."
Samuel Stockwell, a researcher at the Alan Turing Institute's Centre for Emerging Technology and Security, said: "AI can help fact checkers process the huge volumes of claims flowing through social media daily, but much will depend on the quality of the safeguards X puts in place against the risk that these AI 'note writers' could hallucinate and amplify misinformation. AI chatbots often struggle with nuance and context, but are good at confidently providing answers that sound compelling even when untrue."
Researchers have found that people perceive human-written community notes as significantly more trustworthy than simple misinformation flags.
An analysis of hundreds of misleading posts on X in the run-up to last year's US presidential election found that in three-quarters of cases, accurate community notes were not being displayed to users. The misleading posts, which included claims that Democrats were importing illegal voters and that the 2020 presidential election was stolen, were identified by the Center for Countering Digital Hate.
