- The New York Times and NBC News are among the media companies that have begun preliminary discussions about potential protections against generative artificial intelligence systems.
- Digital Content Next, a digital media industry group, this week released seven generative AI principles to guide the discussion.
- “This is the beginning of hellfire,” Axios CEO Jim VandeHei said in an interview.
People walk in front of the New York Times Building in New York City.
Andrew Burton | Getty Images
Newsroom leaders are preparing for disruption, considering guardrails to protect content from artificial intelligence aggregation and disinformation.
The New York Times and NBC News are among the organizations in preliminary talks with other media companies, major technology platforms and Digital Content Next, the industry’s digital news trade organization, to develop rules governing how their content can be used by natural-language artificial intelligence tools, according to a person familiar with the matter.
The latest trend, generative AI, can create quirky blocks of text and images in response to complex queries such as “Write me an earnings report in the style of the poet Robert Frost” or “Draw a picture of the iPhone as painted by Vincent van Gogh.”
Some of these generative AI programs, such as OpenAI’s ChatGPT and Google’s Bard, are trained on vast amounts of information published on the internet, including journalism and copyrighted art. In some cases, the material they generate is lifted almost verbatim from these sources.
Publishers fear that these programs could undermine their business models by publishing repurposed content without credit, creating an explosion of inaccurate or misleading content, and eroding trust in online news.
Digital Content Next, which represents more than 50 of the largest U.S. media organizations, including The Washington Post and The Wall Street Journal parent News Corp, this week published seven principles for the development and governance of generative AI. They address issues around safety, compensation for intellectual property, transparency, accountability and fairness.
The principles are meant to be an avenue for future discussion. They include statements such as “publishers have the right to negotiate the use of their intellectual property and receive fair compensation” and “deployers of GAI systems should be held accountable for the outputs of their systems,” rather than industry-defining rules. Digital Content Next shared the principles with its board and relevant committees on Monday.
News outlets fight AI
Digital Content Next’s Principles for Generative AI Development and Governance:
- GAI developers and adopters must respect the rights of creators to their content.
- Publishers have the right to negotiate the use of their intellectual property and receive fair compensation.
- Copyright law protects content creators from unauthorized use of their content.
- GAI systems should be transparent to publishers and users.
- Deployers of GAI systems should be held accountable for the outputs of their systems.
- GAI systems should not create, or risk creating, unfair market or competition outcomes.
- GAI systems should be safe and address privacy risks.
Jason Kint, CEO of Digital Content Next, said there is acute urgency in building a system of rules and standards around generative AI.
“I’ve never seen anything go from emerging issue to dominating so many workstreams in my time as CEO,” said Kint, who has led Digital Content Next since 2014. “We’ve had 15 meetings since February. Everyone is leaning in, across all kinds of media.”
Axios CEO Jim VandeHei said how generative AI plays out in the coming months and years will be the dominant story in media.
“Four months ago, I wasn’t thinking or talking about AI. Now it’s all I talk about,” VandeHei said. “If you’re running a company and you’re not obsessed with AI, you’re crazy.”
Generative AI presents both potential efficiencies and threats to the news business. The technology can create new content, such as games, travel lists and recipes, that benefits consumers and helps cut costs.
But the media industry is equally worried about the threats AI poses. Digital media companies have watched their business models flounder in recent years as social media and search firms, primarily Google and Facebook, reaped the rewards of digital advertising. Vice declared bankruptcy last month, and shares of news site BuzzFeed have traded below $1 for more than 30 days, prompting a delisting notice from the Nasdaq Stock Exchange.
Against this backdrop, media leaders such as IAC Chairman Barry Diller and News Corp. CEO Robert Thomson are pushing Big Tech companies to pay for any content used to train AI models.
“I am still amazed that so many media companies, some now fatally holed beneath the waterline, were reluctant to advocate for their journalism or for the reform of an obviously dysfunctional digital advertising market,” Thomson said in his opening remarks at the International News Media Association’s World Congress of News Media in New York on May 25.
Diller said at a Semafor conference in New York in April that the news industry must band together, sooner rather than later, to demand payment or threaten litigation under copyright law.
“What you have to do is get the industry to say you cannot scrape our content until you work out systems where the publisher gets some avenue toward payment,” Diller said. “If you actually take those [AI] systems and don’t connect them to a process where there’s some way of getting compensated for it, all will be lost.”
Beyond balance sheet concerns, the most pressing AI issue for news organizations is alerting users to what’s real and what isn’t.
“Broadly speaking, I’m optimistic about this as a technology for us, with the big caveat that the technology poses huge risks for journalism when it comes to verifying content authenticity,” said Chris Berend, head of digital at NBC News Group, who added that he expects AI to work alongside humans in newsrooms rather than replace them.
There are already signs that AI can spread misinformation. Last month, a verified Twitter account called “Bloomberg Feed” tweeted a fake photo of an explosion at the Pentagon, outside Washington, D.C. The photo was quickly debunked as a fake, but it led to a brief dip in stocks. More advanced fakes could sow greater confusion and cause unnecessary panic. They can also damage brands: “Bloomberg Feed” has nothing to do with the media company Bloomberg LP.
“This is the beginning of hellfire,” VandeHei said. “There will be a mass proliferation of mass garbage in this country. Is this real or not real? Add this to a society that is already struggling to figure out what’s real and what isn’t.”
The U.S. government may regulate AI development by Big Tech companies, but the pace of regulation will likely lag the speed at which the technology is adopted, VandeHei said.
Tech companies and news outlets are working to combat potentially destructive AI, such as the recent fabricated photo of Pope Francis wearing a large puffer coat. Google announced last month that it would embed information in the images it generates so that users can decipher whether an image was created by AI.
Disney’s ABC News “already has a team working around the clock, checking the veracity of online video,” said Chris Looft, a coordinating producer for visual verification at ABC News.
“Even with AI tools such as ChatGPT or other generative AI models that work with text, it doesn’t change the fact that we’re already doing this work,” Looft said. “The process is the same: combining reporting with visual techniques to confirm the veracity of video. That means picking up the phone and talking to witnesses, or analyzing metadata.”
Ironically, one of the earliest uses of AI to supplant human labor in the newsroom could be in fighting AI itself. NBC News’ Berend predicts an arms race of “AI policing AI” in the coming years, as both media and technology companies invest in software that can properly sort and label what’s real and what’s fake.
“The fight against disinformation will be one of computing power,” Berend said, adding that one of the core challenges of content verification is technological.
The combination of rapidly developing, powerful technology, input from dozens of significant companies and U.S. government regulation has led some media executives to privately acknowledge that the coming months could be very messy. The hope is that today’s era of digital maturity will help the industry reach solutions more quickly than in the early days of the internet.
Disclosure: NBCUniversal is the parent company of the NBC News Group, which includes both NBC News and CNBC.