AI-focused COPIED bill would make it illegal to remove digital watermarks

A bipartisan group of senators has introduced new legislation to make it easier to authenticate and detect content generated by artificial intelligence, and to prevent the work of journalists and artists from being swallowed by AI models without their permission.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to develop standards and guidelines for proving the origin of content and detecting synthetic content, for example through watermarking. It would also direct the agency to develop security measures to prevent tampering, and would require AI tools used for creative or journalistic content to let users attach provenance information that cannot be removed. Under the bill, content carrying such provenance information also could not be used to train AI models.
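The bill leaves the technical details of how provenance information would be attached and protected to NIST. Purely as a hypothetical sketch of the general idea (none of the field names, keys, or design choices below come from the bill or any NIST standard), one way to make provenance metadata tamper-evident is to sign the content and its metadata together, so that stripping or altering the metadata is detectable:

```python
import hmac
import hashlib
import json

# Hypothetical illustration only: sign content together with its provenance
# metadata so that removing or editing the metadata invalidates the signature.
SECRET_KEY = b"publisher-signing-key"  # placeholder; a real scheme would use asymmetric keys


def attach_provenance(content: bytes, metadata: dict) -> dict:
    """Bundle content with provenance metadata and a signature over both."""
    payload = content + json.dumps(metadata, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content.decode(), "provenance": metadata, "signature": signature}


def verify_provenance(record: dict) -> bool:
    """Return True only if the content and metadata still match the signature."""
    payload = record["content"].encode() + json.dumps(record["provenance"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


record = attach_provenance(b"example article text", {"creator": "Example Newsroom", "ai_generated": False})
assert verify_provenance(record)

record["provenance"].pop("creator")   # simulate stripping provenance information
assert not verify_provenance(record)  # the tampering is now detectable
```

A production system would use public-key signatures and standardized manifests rather than a shared secret, but the tamper-evidence property this sketch demonstrates is the same.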

Content owners, such as broadcasters, artists, and newspapers, could sue companies they believe have used their material without permission or tampered with its authentication markers. State attorneys general and the Federal Trade Commission could also enforce the bill. The bill would also prohibit anyone from “removing, disabling, or altering the provenance information of content,” with an exception for security research purposes.

This is the latest in a series of AI-related bills as the Senate begins to understand and regulate AI technology. Senate Majority Leader Chuck Schumer (D-NY) led the Senate's effort to create an AI roadmap but made it clear that new legislation would be developed in separate committees. The COPIED Act has the advantage of being sponsored by a powerful committee leader, Senate Commerce Committee Chair Maria Cantwell (D-WA). Senate AI Working Group member Martin Heinrich (D-NM) and Commerce Committee member Marsha Blackburn (R-TN) are also leading the bill.

Several publishing and artists' groups, including SAG-AFTRA, the Recording Industry Association of America, the News/Media Alliance, and the Artist Rights Alliance, issued statements praising the bill's introduction.

“AI's ability to create stunningly accurate digital images of actors is a real and immediate threat to our members' economic and reputational well-being and self-determination,” SAG-AFTRA national executive director and chief negotiator Duncan Crabtree-Ireland said in a statement. “To protect everyone's fundamental right to control the use of their face, voice and persona, we need a fully transparent and accountable supply chain for generative AI and the content it creates.”


