Eric Goldman, a law professor at Santa Clara University, argues that generative AI is destined to be swept up in a regulatory tidal wave.
That is a dire prediction for big tech companies like Amazon, Google, Meta, and Microsoft, which have bet big on machine-generated content, but it may be even worse for smaller companies that are eyeing chatbots and automated content creation.
It is a deluge of regulations and red tape that makers of generative AI models are trying to head off through initiatives like the recently formed industry consortium focused on AI safety, whose participating companies aim to prevent generative AI from being used to create child sexual abuse images. The alternative is legal intervention and the costs that come with it.
Goldman outlined the coming wave of regulation last week in a presentation, and an accompanying paper titled "Generative AI is Doomed," at Marquette University School of Law in Milwaukee, Wisconsin.
Generative AI refers to machine learning models trained on text, audio, or images that generate text, audio, or images in response to descriptive prompts: GPT-4, Gemini 1.5, Claude 3, Midjourney, DALL-E, LLaMA 3, and so on. These models are trained on large amounts of other people's content, often without their consent or permission.
There are numerous pending lawsuits alleging copyright infringement by the creators of generative AI models, and their outcomes could limit the viability of generative AI.
"Although I did not fully address the copyright litigation in my talk, copyright law remains a potentially significant barrier to the success of generative AI," Goldman told The Register.
"If copyright holders have a viable claim against generative AI indexing, it will create an unmanageable rights thicket of millions of rights holders. Rights clearinghouses could partially alleviate the problem, but only by significantly increasing costs for the industry (exacerbating the Sport of Kings problem). To avoid this, creators of generative AI models may try countermeasures that reduce the functionality of their models."
It should be noted that the term "Sport of Kings" applies not only to polo but also to another notoriously expensive pastime, patent litigation, and is an equally appropriate reference in this context.
Speaking of LLM disasters
Last week, Microsoft announced "WizardLM-2, our next generation state-of-the-art large language model with improved performance on complex chat, multilingual tasks, reasoning, and agents."
The open source model family was touted for its performance, but was apparently released without proper safety testing. So the Windows giant withdrew the models, or tried to, anyway. They had already been downloaded many times, so they remain available to the public. Enjoy while supplies last.
But such mishaps are not the focus of Goldman's concerns. He worries that the current hostility toward Big Tech has produced a regulatory environment too hostile for generative AI to flourish. In his paper, he recalls the 1990s, when the internet was reaching mainstream audiences and the word "tsunami" was used in a more benign sense to evoke the social impact of emerging digital technologies.
“While it may be hard to imagine now, regulators in the 1990s often took a respectful and generally hands-off approach to new technology,” Goldman wrote. “This stance was driven by a general concern that overly aggressive regulatory responses could distort or harm the emergence of this important innovation.”
Goldman credits laws like Section 230 of the Communications Decency Act, the Digital Millennium Copyright Act, and the Internet Tax Freedom Act with allowing the internet and internet businesses to grow and prosper while providing a flexible, balanced structure.
In his view, today's lawmakers are not addressing concerns about generative AI that way.
He cited figures from the Business Software Alliance to the effect that more than 400 AI-related bills were introduced in US state legislatures in the first 38 days of 2024, a sixfold increase.
“Not all of these bills will pass, but some have already passed and more will pass,” he said. “Regulators are currently ‘flooding the field’ of AI regulation, and each new bill threatens the innovation arc of generative AI.”
Goldman argues that there are several possible reasons why the optimism of the 1990s has faded. First, when the internet emerged, the public had few preconceptions about it. There was relatively little science fiction about it at the time, especially dystopian depictions. Not so with AI, which has been portrayed in books and movies as a malevolent force for decades.
Second, there are the general trends of the times. In the 1990s, the rise of the internet and the spread of communication technology fueled techno-utopianism and cheerleading. Now there is far more skepticism, part of what Goldman calls the "techlash."
That may not be surprising, considering footage of grenade-dropping drones on battlefields, robocar crashes, warehouse robots taking jobs, mobile device-based tracking, algorithmic labor surveillance, and the extreme wealth of tech billionaires seeking to control public discourse.
Third, Goldman cites today's political polarization and warns that partisan use of generative AI poses an existential threat to the technology.
Fourth, he points to differences between the incumbents then and now. In his view, telecom companies were the dominant players in the 1990s, and the prevailing mood was anti-regulation. Now, big tech companies are pouring money into generative AI, creating barriers to entry and shaping the regulatory environment to gain competitive advantage.
"OpenAI is openly calling for stronger regulation of generative AI," Goldman wrote. "This does not prove that such regulation is wise or in the public interest; rather, it may be an effort by an incumbent to thwart competitors. Many regulators would be willing to support these demands even so."
He added that these big tech companies are likely to adopt licensing fees as a way to reduce legal risk, which could increase costs and limit competition.
Goldman predicts that regulators will make their presence felt in every aspect of generative AI, with few restraints from existing US laws such as Section 230 and the First Amendment.
"The regulatory frenzy will have an impact, especially on content creation, of a kind most of us have rarely experienced. The regulatory flood will dramatically reshape the generative AI industry, if the industry survives it at all," he concludes. ®