Deepfake AI regulation is a tightrope for Congress

U.S. lawmakers need to strike the right balance between regulating the use of tools such as generative AI and preserving the free speech protections guaranteed by the First Amendment.

That's according to witnesses who discussed proposed legislation on AI-generated audio and video replicas at a hearing on Tuesday. Problems with deepfake AI (AI used to create realistic but deceptive audio and video of an individual) have intensified in recent years. From robocalls impersonating President Joe Biden to AI-generated songs imitating artists such as Beyoncé and Rihanna, the use of such tools raises pressing legal questions that need answers, said Sen. Chris Coons (D-Del.).

“These issues are not theoretical,” Coons said during a hearing held by the Senate Judiciary Intellectual Property Subcommittee. “As AI tools become increasingly sophisticated, it has become easier to reproduce and distribute false images of someone without their consent, including a faked voice or a faked likeness. Leaving this issue unresolved and doing nothing about it is not an option.”

Indeed, U.S. federal enforcement agencies, Congress, and the European Union have all turned their attention to the use of generative AI to create fake videos, audio, and photos of individuals. In January, members of the U.S. House of Representatives introduced a bipartisan bill to address the issue, the No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act. States are also advancing deepfake AI laws, such as Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act.

In October, a bipartisan group of U.S. senators released a discussion draft aimed at reining in generative AI and protecting human voice and visual likeness from unauthorized recreation: the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. The NO FAKES Act also includes language that would hold platforms such as Meta's Facebook and Instagram liable for hosting unauthorized digital replicas.

While some experts support specific laws to regulate deepfake AI, others argue that existing laws already cover illegal uses of the technology and caution against overly broad rules that could stifle innovation.

Stakeholders testify about AI regulations

Warner Music Group CEO Robert Kyncl testified at the hearing that deepfake AI poses a threat to a person's voice and likeness and needs to be regulated.

He warned that the technology affects everyone, including business leaders, whose images and voices could be manipulated to the detriment of their business relationships.

“Untethered deepfake technology has the potential to impact everyone,” Kyncl said.

He said bills like the NO FAKES Act would establish legally enforceable intellectual property rights in an individual's likeness and voice, and would provide an effective deterrent against AI model builders and digital platforms that intentionally violate those rights.

Kyncl added that while some argue that regulating AI threatens free speech, he doesn't think so.

“AI can put words in your mouth, and AI can make you say things you don't say or believe,” Kyncl said. “That's not free speech.”


Music artist and performer Tahliah Debrett Barnett, known as FKA twigs, also testified in support of the bill. She said Congress needs to enact laws to prevent the misuse of artists' work.

“I stand before you today because you have the power to protect artists and their work from the risks of exploitation and theft inherent in this technology if it remains unchecked,” she said.

Ben Sheffner, senior vice president and associate general counsel for law and policy at the Motion Picture Association, testified that while the NO FAKES Act is a “thoughtful contribution” to the debate over how to establish guardrails against misuse of the technology, legislation regulating AI-generated content involves restrictions on speech, which the First Amendment “severely constrains.”

“Achieving the bill's objectives without inadvertently chilling or prohibiting legitimate, constitutionally protected uses of technology to enhance storytelling will require very careful drafting,” he said. “This is a technology that is fully protected by the First Amendment and has completely legal uses that do not require the consent of the people depicted.”

Additionally, Sheffner said it's important for Congress to pause and ask whether the harms it seeks to address are already covered by existing laws against defamation and fraud. If gaps remain in specific areas, such as election-related deepfakes, he said the best answer is “narrow, specific legislation that targets that specific issue.”

Lisa Ramsey, a law professor at the University of San Diego School of Law, agreed with Sheffner, testifying that the bill as drafted is “overbroad and vague” and inconsistent with First Amendment protections. But she said the bill could be amended to address those concerns without unnecessarily suppressing protected speech.

Deepfake AI draws national and global scrutiny

Congress is not alone in tackling this issue. In February, the Federal Communications Commission declared AI-generated voices in robocalls illegal. Additionally, the Federal Trade Commission is seeking public comment on a proposed rule that would prohibit the impersonation of individuals, according to a news release.

The FTC said in a statement that it is taking action after a surge in complaints and public concern about impersonation fraud, a problem it said emerging technologies such as AI-generated deepfakes are exacerbating. The FTC is also considering whether the proposed rule should declare it unlawful for AI platforms that create images, video, or text to provide goods or services they “know or have reason to know is being used to harm consumers through impersonation.”

While it is important to remove unauthorized AI-generated content and prevent deceptive practices, it is also important to consider how existing rules, regulations, and laws that prohibit illegal activities continue to apply to AI, said Linda Moore, president and CEO of TechNet, a network of senior technology executives that aims to promote innovation, in a statement.

Moore said the FTC's proposed rule is too broad and could lead to unintended consequences that hinder both enforcement of existing laws and AI innovation.

“More tailored rules will more effectively prevent individual impersonation, allow innovation to flourish, and encourage companies to implement strong compliance programs,” she said in a statement.

The European Union is also working on this issue. The European Commission, the EU's executive arm, this week launched formal proceedings to assess whether Meta has breached the Digital Services Act with its practices and policies regarding political disinformation.

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a reporter at the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.


