YouTube lawsuit in Supreme Court could shape ChatGPT and AI protections



WASHINGTON (Reuters) – When the U.S. Supreme Court decides in the coming months whether to weaken the powerful shield protecting internet companies, the ruling may also affect rapidly developing technologies like the artificial intelligence chatbot ChatGPT.

The justices are due to rule by the end of June on whether Alphabet Inc’s (GOOGL.O) YouTube can be sued over its video recommendations to users. The case tests whether a U.S. law that protects technology platforms from liability for content posted online by their users also applies when companies use algorithms to target users with recommendations.

What the court decides on those issues matters beyond social media platforms. Its ruling could influence whether companies that develop generative AI chatbots, such as OpenAI’s ChatGPT, of which Microsoft Corp (MSFT.O) is a major investor, and Google’s Bard, should be protected against legal claims such as defamation and invasion of privacy, according to technology and legal experts.

This is because the algorithms powering generative AI tools such as ChatGPT and its successor GPT-4 work in a somewhat similar way to those that suggest videos to YouTube users, the experts added.

“Organizing the information available online through recommendation engines is critical to shaping content,” said Cameron Kerry, a visiting fellow and AI expert at the Brookings Institution, a Washington think tank. “We have the same kind of problem with chatbots.”

Representatives from OpenAI and Google did not respond to requests for comment.

During arguments in February, Supreme Court justices expressed uncertainty over whether to weaken the protections provided by the statute known as Section 230 of the Communications Decency Act of 1996. Although the case does not directly relate to generative AI, Justice Neil Gorsuch said that AI tools that generate “poetry” and “polemics” likely would not receive such legal protection.

The case is just one aspect of an emerging debate over whether Section 230 immunity should apply to AI models that are trained on troves of existing online data but can still produce original works.

Section 230 protections generally apply to third-party content from users of technology platforms and not to information that companies help develop. The court has yet to consider whether responses from AI chatbots are covered.

“The Consequences of Their Own Actions”

Democratic Senator Ron Wyden, who helped draft the law while in the House of Representatives, said the liability shield should not apply to generative AI tools because such tools “create content.”

“Section 230 is about protecting users and sites for hosting and organizing users’ speech. It shouldn’t protect companies from the consequences of their own actions or products,” Wyden said in a statement to Reuters.

The tech industry has pushed to preserve Section 230 despite bipartisan opposition to the immunity. Industry representatives argue that tools like ChatGPT operate like search engines, directing users to existing content in response to queries.

“AI isn’t really creating anything. It’s transforming existing content into another form or another format,” said Carl Szabo, vice president and general counsel at NetChoice, a trade group for the tech industry.

Szabo said weakening Section 230 presents an impossible task for AI developers and threatens to expose them to a flood of lawsuits that could stifle innovation.

Some experts predict that courts may take a middle ground, examining the context in which an AI model generated a potentially harmful response.

The shield may still apply in cases where an AI model appears to paraphrase existing sources. But chatbots like ChatGPT are known to create fictitious responses that appear to have no connection to information found elsewhere online, a situation experts say would likely not be protected.

Hany Farid, an engineer and professor at the University of California, Berkeley, said it is unimaginable to argue that AI developers should be immune from lawsuits over the models they “programmed, trained and deployed.”

“Companies produce safer products when they are held liable in civil lawsuits for damage caused by the products they produce,” Farid said. “And when they are not held accountable, they produce less safe products.”

The case before the Supreme Court involves an appeal by the family of Nohemi Gonzalez, a 23-year-old college student from California who was shot dead in a 2015 rampage by Islamist extremists in Paris, of a lower court ruling that dismissed her family’s lawsuit against YouTube.

The lawsuit accuses Google of providing “material support” for terrorism and claims that YouTube, through its video-sharing platform’s algorithms, unlawfully recommended videos by the extremist group Islamic State, which claimed responsibility for the Paris attacks, to certain users.

Reporting by Andrew Goudsward; Editing by Will Dunham.




