Scarlett Johansson, OpenAI, and the Silence of 'Sky'

[UPDATED: see second paragraph] Actress Scarlett Johansson recently accused OpenAI of using a voice that imitates hers in ChatGPT 4o. The voice, known as Sky, has been turned off, at least for now, and OpenAI maintains that it is not an imitation of Johansson. But the history of OpenAI's interactions with Johansson raises questions about the veracity of the company's claims, and about the complicated road ahead for AI ethics.

After Johansson's allegations became public, The Washington Post reported, citing documents it had reviewed, that a different actor had been hired to voice Sky. This article will be updated if more details become available.

Johansson's letter states that she turned down an offer from Sam Altman to lend her voice to ChatGPT 4o in September of last year. According to the letter, Johansson and her family and friends were surprised when they later heard how closely OpenAI's Sky resembled her. She also noted that Altman had tweeted a reference to the film "Her," in which Johansson voiced an interactive chat system.

Johansson's letter said her lawyers had written to OpenAI asking for more details about how it created the Sky audio.

OpenAI, which has paused Sky, posted a description online explaining how it chose the voice, insisting that it was not trying to imitate Johansson.


Sky High Stakes

The issue of image rights and likeness management is not new. Even before generative AI (GenAI) shook the world, controversies were brewing over the alleged use of individuals' likenesses without their consent. For example, actress Lindsay Lohan sued Rockstar Games, alleging that a character in the video game Grand Theft Auto V was modeled on her. The lawsuit was ultimately dismissed.

The rise of GenAI has intensified debates around image rights, as technology can mimic or copy the performances or work of real people, and can even be misused to create deepfakes of images, audio, and video to harm individuals or advance political propaganda.

In her letter, Johansson said she looked forward to “the enactment of appropriate legislation” to ensure transparency and protect individual rights.

The 2023 SAG-AFTRA strike offers one example. Film and television studios had reportedly hoped to use GenAI to create background characters based on actors' likenesses, while paying those actors only a one-time fee to use their images in perpetuity. The strike ended after SAG-AFTRA (the Screen Actors Guild-American Federation of Television and Radio Artists) secured concessions on the conditions for the use of AI and digital reproduction.


It's not just about celebrities

The sourcing of GenAI training material and the ownership of the content it creates can be unclear, and the picture is further complicated by differing interpretations of existing laws and of legislation that may yet be enacted.

At issue is the application of the right of publicity, which is intended to prevent elements of a person's identity, such as their name, voice, or likeness, from being misappropriated for commercial purposes. "The right of publicity was like a haystack in a hurricane decades ago, and it hasn't gotten better," says Matt Savallé, partner and chair of the commercial agreements practice at law firm Lowenstein Sandler. "Specifically, in the United States, there is no federal right of publicity law."

This means that states may interpret the right differently, Savallé says: “Some states recognize the right of publicity through common law or case law. Some recognize it by statute. Some recognize it through both. Some don't recognize it at all.”

And that's before AI enters the conversation. “This is a very subtle, complex and nuanced issue in the law,” he says. “It's probably one of the most complex areas that I get asked about frequently.”


Regulating the way forward

While the debate surrounding GenAI is intense, there also seems to be recognition that the technology and its uses aren't going away. In a statement shared with InformationWeek, SAG-AFTRA commented on the issues Johansson raised about OpenAI and Sky: "We share her concerns and fully support her right to seek clarity and transparency regarding the voice used in the development of the ChatGPT 4o voice, Sky."

The statement also said the union “strongly supports federal legislation to protect the voices and likenesses of our members, and the public, from unauthorized digital reproduction.” It added, “We are pleased that OpenAI has responded to these concerns by pausing use of Sky, and we look forward to working with OpenAI and other industry stakeholders to enact transparent and resilient protections for us all.”

Joe Jones, director of research and insights at the International Association of Privacy Professionals (IAPP), says the topic is complicated by the many intertwining elements that go into intellectual property and copyright. “When audio is trained from audiovisual content, like a movie or a song, there are many different layers of copyright and rights owners,” Jones says. “The screenwriter, the recordist, the cinematographer, the producer. So there's a whole web of ownership interests.”

The lack of uniform federal laws on rights, privacy, and how GenAI works could lead to a quagmire as AI evolves. Companies looking to extend AI with voice could run into state-level regulations in the process. “Some states in the U.S. treat voice as a biometric data point,” Jones says. Plus, organizations may not want to assume that every data source they have access to is suitable for AI. “There are privacy issues with reusing data for secondary purposes,” he says. “That voice data may have been collected for one purpose and then used to train an algorithm years later. It's not always straightforward to say whether you can use the data for a secondary purpose.”
