Whitbrook, a deputy editor at Gizmodo who writes and edits articles about science fiction, said he read the story immediately but had never been asked for input or seen it before publication. In an email to Gizmodo editor-in-chief Dan Ackerman, he listed 18 “concerns, corrections, and comments” about the story, noting that the bot had placed the TV series “Star Wars: The Clone Wars” in the wrong order, omitted any mention of television shows such as “Star Wars: Andor” and the 2008 film also titled “Star Wars: The Clone Wars,” formatted movie titles and story descriptions incorrectly, included repetitive descriptions, and carried no “explicit disclaimer” that it was written by AI apart from the “Gizmodo Bot” byline.
The article immediately sparked an outcry among employees, who complained in the company’s internal Slack messaging system that the error-riddled story was “actively hurting our reputations and credibility,” showed “zero respect” for journalists, and should be deleted immediately, according to messages obtained by The Washington Post. The story was written using a combination of Google Bard and ChatGPT, according to a G/O Media staff member familiar with the matter. (G/O Media owns several digital media sites, including Gizmodo, Deadspin, The Root, Jezebel, and The Onion.)
“I have never had to deal with this basic level of incompetence with any of the colleagues I have ever worked with,” Whitbrook said in an interview. “If these AI [chatbots] can’t even do something as basic as put the Star Wars movies in order, I don’t think you can trust it to [report] any kind of accurate information.”
There’s no denying the irony that the uproar happened at Gizmodo, a publication devoted to covering technology. On June 29, G/O Media’s editorial director, Merrill Brown, had cited the organization’s editorial mission as a reason for embracing AI. Because G/O Media owns several sites that cover technology, he wrote, it has a responsibility “to develop AI initiatives relatively early in the evolution of the technology.”
“These features aren’t replacing work currently being done by writers and editors,” Brown said in announcing to staffers that the company would roll out a trial to test “our editorial and technological thinking about use of AI.” “There will be errors, and they’ll be corrected as swiftly as possible,” he promised.
Gizmodo’s error-ridden test speaks to a larger debate about the role of AI in news. Several reporters and editors said they don’t trust chatbots to produce well-reported and thoroughly fact-checked stories. They worry that business leaders are trying to push the technology into newsrooms without due diligence. A failed trial, they argue, damages not only employee morale but also the outlet’s reputation.
Artificial intelligence experts said many large language models still have technical flaws that make them unreliable sources for journalism unless humans are deeply involved in the process. Left unchecked, they said, artificially generated news stories could spread disinformation, sow political discord, and significantly impact media organizations.
“The danger is to the credibility of the press,” said Nick Diakopoulos, an associate professor of communication and computer science at Northwestern University. “If you’re going to publish inaccurate content, I think you’re probably going to take a credibility hit over time.”
G/O Media spokesman Mark Neschis said the company would be “obsolete” if it didn’t experiment with AI. “We think the AI trial has been successful,” he said in a statement. “In no way do we plan to reduce editorial headcount because of AI activities.” He added: “We are not trying to hide behind anything, we just want to get this right. To do that, you have to embrace trial and error.”
Brown told disgruntled employees in a Slack message seen by The Post on Thursday that the company is “eager to gather feedback and act on it.” “In the search for the best ways to use the technology, better stories, ideas, data projects, and lists will emerge,” he said. Screenshots of the Slack conversation show the note drew 16 thumbs-down emoji, 11 trash-can emoji, six clown emoji, two face-palm emoji, and two poop emoji.
News organizations are struggling with how to use AI chatbots, which can now craft essays, poems, and stories often indistinguishable from human-written content. Several media sites that have tried using AI for reporting and writing have been met with disaster. G/O Media seems undeterred.
Earlier this week, Lea Goldman, deputy editorial director at G/O Media, notified employees via Slack that the company had “commenced limited testing” of AI-generated articles on four sites — the A.V. Club, Deadspin, Gizmodo, and The Takeout — according to messages seen by The Post. “You may spot errors. You may have issues with tone and/or style,” Goldman wrote. “I am aware that you object to this writ large, and that your respective unions have already weighed in with objections and other issues and will continue to do so.”
Employees immediately responded with messages of concern and skepticism. “Nowhere in my job description does it say I am to edit or review AI-generated content,” one employee said. “If you wanted an article on the order of the Star Wars movies, you … could have just asked,” said another. “AI is a solution looking for a problem,” a third employee said. “We have talented writers who know what we’re doing. So effectively all you’re doing is wasting everyone’s time.”
Several AI-generated articles have been spotted on the company’s sites, including the Star Wars story on Gizmodo’s io9 vertical, which covers science-fiction-related topics. On the sports site Deadspin, an AI “Deadspin Bot” wrote a story about the 15 most valuable professional sports franchises with only limited ratings of the teams; it was corrected on July 6 with no indication of what had been wrong. The company’s food site, The Takeout, ran a “Takeout Bot”-bylined story on “the most popular fast food chains in America based on sales” that cited no sales figures. On July 6, Gizmodo appended a correction to its Star Wars story, saying the episodes had been “incorrectly ranked.”
Gizmodo’s labor union issued a statement on Twitter condemning the stories. “This is unethical and unacceptable,” they wrote. “If you see a byline ending in ‘Bot,’ don’t click it.” Readers who click on the Gizmodo Bot byline itself are told that these “stories were produced with the help of an AI engine.”
Diakopoulos, of Northwestern University, said chatbots can produce low-quality articles. The bots, which train on data from sources such as Wikipedia and Reddit and use it to predict the likely next word in a sentence, still have technical problems that make them difficult to trust for reporting and writing, he said.
Chatbots sometimes make up facts, omit information, write skewed statements, regurgitate racist or sexist content, poorly summarize information, or outright fabricate quotes, he said.
He said that news outlets using bots need to edit consistently, that the job can’t be left to a single person, and that content must be reviewed multiple times to ensure it is accurate and conforms to the media company’s style.
But the danger isn’t limited to threats to news outlets’ credibility, news researchers say. Sites have also begun using AI to create fabricated content, accelerating the spread of misinformation and potentially causing political turmoil.
The media watchdog NewsGuard says at least 301 AI-generated news sites “operate with no human oversight and publish articles written largely or entirely by bots,” spanning 13 languages, including Chinese and French. Some generate false content, such as celebrity death hoaxes or entirely fabricated events, the researchers wrote.
NewsGuard analysts said ad-tech companies often place digital ads on these sites “without regard to the nature or quality” of the content, creating an economic incentive for organizations to use AI bots to churn out as many articles as possible to host ads.
Lauren Leffer, a Gizmodo reporter and member of the Writers Guild of America, said this is a “very transparent” effort by G/O Media to increase advertising revenue, because AI can quickly create articles that generate search and click traffic at a fraction of the cost of human reporters.
She added that the trial has demoralized reporters and editors, who feel their concerns about the company’s AI strategy have gone unheeded and that management does not value them. Journalists are not immune to making mistakes in their stories, she said, but reporters have an incentive to limit errors because they are accountable for what they write. That is not the case with chatbots.
Leffer also noted that, as of Friday afternoon, the Star Wars story had about 12,000 page views on Chartbeat, a tool that tracks news traffic. That’s paltry, she said, compared with the roughly 300,000 page views that a human-written article about NASA had generated in the previous 24 hours.
“If you want to run a company that tries to trick people into clicking on [content] by mistake, then [AI] might be worth your time,” she said. “But if you want to run a media company, why not trust your editorial staff to know what your readers want?”
