Increasingly complex: How should journalists manage the use of AI in their products?




Like many other sectors of the economy, the news industry is moving at breakneck speed toward a future in which artificial intelligence plays a major role, and is grappling with questions such as how much of the technology is being used, what to tell consumers about it, and whether anything can be done for journalists who will be left behind.

Reporters from the independent outlet ProPublica had these issues in mind as they walked the picket line earlier this month. They have taken steps toward a potential strike, believed to be the first such action in the news industry in which AI has become the biggest sticking point.

Few expect this debate to be the last.

AI has undeniably helped journalists, simplifying complex tasks and saving time, especially on data-driven stories. News organizations are using it to sift through the Epstein files. AI suggests headlines and summarizes stories. Transcription technology has all but eliminated the need for humans to type up interviews. These days, even simple Google searches often involve AI.

However, the rush to find out how AI can help an economically challenged industry has produced several cases in which publications were forced to admit embarrassing errors.

Within the past year, Bloomberg has announced multiple corrections to errors in its AI-generated news summaries. Business Insider and Wired were forced to remove articles written by a fake author named Margaux Blanchard. The Los Angeles Times ran into trouble with AI-assisted opinion pieces. And Ars Technica, which has frequently reported on AI fabricating citations and the risks of overreliance on AI tools, was embarrassed when it failed to follow its own policy of telling readers when such tools are used.

The ProPublica dispute is notable for how it touches on issues that are frequently a source of debate. The union representing ProPublica’s journalists, which is negotiating its first contract with the investigative newsroom, says it wants commitments about disclosure and the role of humans in the use of AI similar to those sought elsewhere in the industry.

Jen Sheehan, a spokeswoman for the NewsGuild of New York, which represents many of the city’s journalists, said union members overwhelmingly voted not only to hold an informational picket but also to strike unless there is a satisfactory agreement.

“If you think about the trajectory of AI and journalism, this feels pretty monumental,” said Alex Mahadevan, an expert on the subject at the Poynter Institute, a journalism think tank.

ProPublica rejected that request, according to the union. One reason why can be found in this month’s widely circulated essay, “Something Big Is Happening.” Its author, investor Matt Schumer, who spent six years building an AI startup, writes that the technology is moving so fast that “if you haven’t been experimenting with AI in recent months, you won’t recognize what exists today.”

Media outlets reluctant to put policies in writing

It’s no wonder, then, that news organizations are reluctant to put guarantees in writing that can quickly become obsolete.

Rather than make promises that it can’t keep, ProPublica is exploring ways to use technology to create more space for investigative reporting, said company spokesman Tyson Evans. In response to the “unlikely event” of AI-related layoffs, ProPublica is proposing expanded severance packages for affected employees, he said.

“We approach AI with both curiosity and skepticism,” Evans said. “It’s a mistake to freeze editorial decisions in contracts that last for years.”

Fifty-seven of the 283 contracts with U.S. news organizations negotiated by the NewsGuild include language related to artificial intelligence, said Jon Schleuss, president of the union, which represents more journalists than any other in the country. The first such deal came in 2023, and The Associated Press was one of the pioneers. He wants to see such language in more contracts.

Judging by the reluctance of many media outlets to be tied down, it won’t be easy. Trusting News, an organization that encourages newsrooms to develop and publish policies on the use of AI, estimates that fewer than half of U.S. news organizations do so.

“I think it’s getting harder and harder,” Schleuss said. “Because too many newsrooms are run by the greedy side of the organization, not the journalistic side of the organization.”

The guild is seeking contract language guaranteeing that no jobs will be lost to AI. That’s not surprising: labor unions exist to protect jobs. Schleuss characterized the proposal as ensuring that actual journalists are involved whenever AI is used, as a way to prevent mistakes and help news organizations build trust with their readers.

“Humans are actually much better at going out and finding stories, whether it’s news stories or videos, interviewing sources, bringing back relevant stories, asking difficult follow-up questions, and putting it together in a way that people can understand and see,” he said. “Humans are much better at that than AI.”

Not everyone in journalism agrees, apparently. Chris Quinn, editor of The Plain Dealer in Cleveland, Ohio, wrote earlier this month of his dismay at a college graduate who turned down a job offer because he had been taught that AI would harm journalism.

Quinn’s newspaper sends reporters out to interview people and gather quotes and information, which is then fed into a computer that writes the article. Humans edit what the computers spit out, but an essential part of the process has been taken out of reporters’ hands: deciding for themselves how to tell the story. Quinn defended the practice as the best use of limited resources.

“Catch-22” in public attitudes toward AI disclosure

Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota, said surveys have found that a majority of U.S. consumers think it is very important for newsrooms to tell the public when AI is used in writing or photo editing. But here’s the problem: that kind of disclosure makes them trust an article less, not more.

A survey Toff conducted last year found that a sizable minority (30%) did not want AI used in journalism at all.

Telling readers when AI was used is not as simple as it might seem. Poynter’s Mahadevan said: “There is so much use of AI in journalism, from the beginning of the reporting process all the way to publication, that just making a sweeping declaration that if AI is used in the reporting process it has to be made public seems to be doing readers a real disservice in some cases.”

Two lawmakers in New York, the nation’s publishing capital, introduced a bill this month that would require clear disclaimers when artificial intelligence is used in published content. There is no word yet on whether it will pass, but both sponsors are Democrats in the Democratic-controlled state Legislature.

Mahadevan believes it’s fair to have policies that require human involvement, such as editing to prevent gaffes. However, he said these declarations are also open to interpretation. If an outlet uses a chatbot to answer reader questions, are the edits being done by humans?

“Realistically speaking, newsrooms of the future will look very different than they do today,” he says. “That means people will lose their jobs. New jobs will be created. So I think it’s important to have these conversations now because audiences don’t want newsrooms to be completely taken over by AI.”

___

David Bauder writes about the intersection of media and entertainment for The Associated Press. Follow him at http://x.com/dbauder and https://bsky.app/profile/dbauder.bsky.social.

Copyright © 2026 Associated Press. Unauthorized reproduction is prohibited. This material may not be published, broadcast, rewritten or redistributed.




