What audiences want newsrooms to disclose about AI use


News audiences generally want journalists to disclose when they use artificial intelligence at work.

That is a key takeaway from a recent survey led by Trusting News, a nonprofit organization that helps news organizations build trust with their audiences.

Trusting News founder Joy Mayer presented her findings at the annual Investigative Reporters and Editors (IRE) conference, held in New Orleans last month.

“Many people assume that journalists are already using AI but are really confused about what that means and what it looks like,” Mayer said at the conference, citing a study from the Reuters Institute for the Study of Journalism at Oxford University.

Newsrooms most commonly use AI tools to create story summaries, for transcription, and for language translation, Mayer said.

The Trusting News survey was conducted in July and August 2024, with 10 partner news organizations collecting more than 6,000 survey responses from their audiences.

Partner organizations were 100 Days in Appalachia, American City Business Journals, Houston Landing, Iowa Public Radio, Austin, Texas-based KXAN-TV, The Associated Press, The Texas Tribune, the USA Today Network, and Voice Media Group. Several organizations recruited survey participants directly on their websites via information boxes or pop-ups.

Of respondents, 94% said they wanted newsrooms to disclose, in general, when they use AI; 87% said AI disclosures should include the reasons reporters used AI; and 92% wanted to know that humans were involved in reviewing AI-generated information.

As a follow-up to the survey, Trusting News created a template for AI disclosures that newsrooms can tailor to their specific needs:

We used (AI tool/description of tool) in this story to help us (what the AI tool did or helped with). When using (AI/tool), we (confirmed facts, had a human review the work, and made sure it met our ethics/accuracy standards). Using this (allowed us to do more of X, go deeper, deliver content on more platforms, etc.).

At the IRE conference, Mayer advocated for newsroom transparency about the use of AI in the editorial process.

“Most evidence of our integrity is invisible to the audience,” Mayer said. “All of these decisions that we spend time talking about in the newsroom, patting ourselves on the back, [saying] ‘Look at how thoughtful we are, look how responsible we are, look how careful we are with this decision.’ Unless we talk about it, we can feel good going home at the end of the day, but it doesn't really affect people's perceptions.”

Here are four more takeaways from Mayer's presentation at IRE.

1. Tailor AI disclosures to your audience.

The details included in AI disclosures may vary widely from one news organization to another. Some news organizations may simply disclose their use of AI without providing technical information, such as which AI tools were used.

Here is an example from a May 2024 USA Today story:

A June 2024 report from the Reuters Institute suggests that younger, tech-savvy audiences may be more comfortable with more technical disclosures.

Below is an example of a more technical disclosure that Mayer shared during her presentation, from SWI, the Swiss Broadcasting Corporation's online news outlet:

2. Know that audiences may tune out the same disclosure used again and again.

It is important that AI disclosures meet audience needs, but Mayer noted that identical disclosures may stop resonating. “If you disclose in general, [and] they are always exactly the same, it's noise,” she said.

Mayer pointed to the Brazilian tech news outlet Nucleo as an example of a media organization that provides varied technical explanations of its use of AI in individual stories.

3. Build trust with your television or online video audience by working AI disclosures into stories.

Focus groups told Trusting News and the Center for Media Engagement at the University of Texas at Austin that television news outlets can build trust with audiences by providing transparency.

“In general, transparency that is worked into the storytelling is more effective than a sidebar or a click-through or a line at the bottom,” Mayer said. “Where you can work it in, people notice it more and remember it more.”

As an example of an effective way to do this, Mayer cited a New York Times video investigation that found Israel had dropped bombs in areas of Gaza where civilians had been ordered to go for safety.

The disclosure is included in the video script:

“How often these bombs were used is unknown, but visual evidence shows that Israel dropped 2,000-pound bombs in areas where civilians were ordered to go. The Times programmed an artificial intelligence tool to analyze satellite imagery of southern Gaza and search for bomb craters. The AI tool detected over 1,600 possible craters. We manually reviewed each one to eliminate false positives, such as shadows, towers, and bomb craters from previous conflicts. We measured the remaining craters and found craters measuring over 40 feet, which experts say are usually formed only by 2,000-pound bombs.”

4. Note that disclosures may change as AI tools become more widely accepted.

For now, audiences want to know when journalists are using AI, with a few exceptions.

Mayer suggested that news organizations probably don't need to disclose when reporters use AI tools such as ChatGPT as search engines, for example, using ChatGPT instead of Google to look up information available online.

Disclosure is more necessary when AI tools specifically inform or enhance a news organization's reporting. But that could change over time, as audiences come to understand more broadly how journalists use AI and become accustomed to using AI tools themselves.

“As people become more and more comfortable with something, or a tool becomes more widely used, the less unusual and different it becomes, and the less necessary it is to point it out,” Mayer said.
