Artificial intelligence makes it possible to create digital replicas of celebrity voices and styles, allowing audiences to hear beloved voices of the past and interact with them in new ways. However, using someone else’s voice without consent also raises ethical issues.
It’s not uncommon for technology trends to outstrip the legal frameworks that regulate them. So, with new features and productivity gains, it’s no surprise that the rise of AI raises legal issues around copyright, fair use, and more.
These problems are already occurring in synthetic media in the art and imaging space with tools like Midjourney and DALL-E. Creating AI art directly inspired by artists whose works are in the public domain, such as Monet and Rembrandt, is not legally controversial. But art inspired by living artists is a completely different story.
While most of the current controversy surrounding large language model chatbots like ChatGPT focuses on factual accuracy, the problems occurring in image creation can also affect generated text. Being able to chat with characters trained on the complete writings of Voltaire or Mark Twain is amazing, but using generative AI to approximate a more modern person is ethically questionable.
For example, Walter Cronkite was once considered America’s most trusted voice. In an age of unreliable information, does CBS have the legal right to recreate Cronkite’s voice and speech patterns based on content produced under its own brand? While there are obvious attractions to reviving the voice of a deceased person, duplicating it raises questions about commercial exploitation and appropriation. Like it or not, this is an issue the industry must grapple with.
In October 2022, just before ChatGPT launched, I wrote for Poynter that an AI could edit the first draft of an article by suggesting style changes, or by taking an anecdote from the sixth paragraph and using it as the kicker. Replace “Hunter S. Thompson” with a brand name like “The New York Times” and suddenly you have a “living” company defending its trademark, one that could sue anyone claiming to write in the style of “The New York Times.” And when it comes to someone like Barbara Walters, who would stand up to prevent her style from being adopted without fair compensation?
According to “News Media and the Dawn of Generative AI,” a report by the International News Media Association, synthetic media is not currently subject to copyright. AI prompts cannot be copyrighted either, any more than clever Google search queries can. But the fair use doctrine will be put to the test. For publishers, the report advises, that means staying abreast of legal news and best practices to protect both your work and yourself.
Some of the work of writers like Ida B. Wells and Mark Twain may be in the public domain, but what if CNN wanted to leverage Anderson Cooper long after he retired? Could the network protect his likeness as intellectual property the way Disney protects an animated character?
And that’s just the legal concern. Don’t forget the audience. How would they make sense of such content? Would their ability to comprehend the news be enhanced if it were delivered by a “trusted voice”? Or would the artifice, once it came to light, destroy trust in the messenger?
None of this is fanciful; it’s already happening. Actor Edward Herrmann, who passed away in 2014, still provides the voice for several recent audiobooks. While this gives fans an opportunity to hear a familiar voice and companies an opportunity to keep their talent working longer, it also raises ethical concerns about using the voice of a deceased person without permission.
News brands are already exploring safer legal territory with personified chat experiences. Two examples are Bloomberg’s “BloombergGPT” and Skift’s “AskSkift.” These bots are trained on data from their respective organizations to answer financial and travel questions in branded voices. News brands could even license these “brand voices” to other publishers.
Ultimately, artificial intelligence and personal style will intersect in unexpected ways, whether for individuals or organizations. Just as a movie might someday be remade with Arnold Schwarzenegger in every role, news and information will be delivered in whatever voices and styles publishers find most effective at extending reach and conveying meaning.
It remains to be seen how fair use and copyright law will apply in specific cases where AI-generated content mimics particular authors or styles. But these emerging technologies will no doubt continue to provoke interesting debates over intellectual property rights as the media landscape changes.
