- Now is the time to start crafting new laws around generative AI, technology law experts told BI.
- Some warn that a new “dark age” could emerge if the industry remains largely unregulated.
- Currently, there is no uniform federal law addressing the use of AI in the United States.
The dangers of generative artificial intelligence are already becoming apparent, and now is the time to start crafting new laws and regulations for the rapidly evolving technology, technology law experts told Business Insider.
One legal expert has warned that if the relatively new AI industry is left largely unregulated, it could usher in a new “dark age,” a period of societal decline.
“If this is allowed to run wild, without regulation and without compensation for those who are using it, it's basically a new dark age,” said Frank Pasquale, a law professor at Cornell Tech and Cornell Law School.
“This is a harbinger of a new dark age, or perhaps a complete evisceration of the incentives for producing knowledge in many fields, and I find that very worrying,” he added.
As AI tools such as OpenAI's ChatGPT and Google's Gemini grow in popularity, experts say that nearly three decades of largely unregulated social media should offer lessons for AI.
The chief issue to emerge so far is the use of copyrighted works to train the technology.
Authors, visual artists, media outlets and computer programmers have already filed lawsuits against AI companies such as Microsoft-backed ChatGPT maker OpenAI, alleging that their original work was used to train AI tools without their permission.
While there is no uniform federal law governing the use of AI in the United States, some states have already passed their own laws regarding the use of AI, and Congress is also exploring ways to regulate the technology.
Pasquale said AI regulation could prevent many of the problems that could fuel this new dark age.
“Continued free and unlimited expropriation of copyrighted works will likely further demoralize many creators and ultimately deprive them of income as AI unfairly outcompetes or effectively overwhelms them,” Pasquale said.
Pasquale said many will view low-cost automated content as a “gift of abundance” until it becomes clear that AI itself relies on the ongoing input of human-created work to improve and remain relevant in a changing world.
“At that point, it may be too late to revitalize neglected and languishing creative industries,” he said.
Mark Bartholomew, a law professor at the University at Buffalo, also worries that AI could one day “generate so much content, from artwork to advertising copy to TikTok videos, that it dwarfs what real people are posting,” but for now he says he's more worried about AI being used to spread misinformation, create political or pornographic deepfakes, and commit fraud.
Bartholomew warned that without comprehensive AI regulation soon, we could face elections rife with misinformation, the proliferation of deepfakes, and fraudsters using AI to impersonate other people's voices.
“It's dangerous to say, in 2024, we know exactly what to do with AI,” Bartholomew said, adding that introducing too much regulation prematurely could stifle “promising new technologies like AI.”
But, he added, “My personal view is that the danger is now high enough that we need to step in and at least put in place concrete regulations to address what we already know to be a real problem.”
“Even if we pass laws banning the use of AI for political deepfakes, the problem isn't simply going to shrink and disappear,” Bartholomew said.
US intellectual property law on copyright infringement and state-level rights of publicity are among the main legal frameworks currently being invoked to regulate AI in the US.
Harry Surden, a law professor at the University of Colorado Law School, agreed that new federal laws should be enacted to specifically regulate AI, but warned against doing so too hastily.
“We're really bad at predicting how these technologies will emerge and what the problems will be,” said Surden, who is also associate director of Stanford University's CodeX Center for Legal Informatics. “We don't want to do this in a rushed or political or ad-hoc manner.”
“It could end up hurting all the good stuff as well as the bad,” he said.
Both Bartholomew and Pasquale argued that the lack of regulation around social media, and the way lawmakers have generally disregarded it since its inception, should serve as a lesson in how to deal with AI.
“This is a lesson learned,” Bartholomew said. “We waited too long to regulate social media, and it caused some serious problems.”
And, he said, we still haven't found the political will to do anything about this issue.
Pasquale added that when social media first emerged, people didn't really anticipate “how much it could be exploited and weaponized by bad actors.”
“There is precedent for regulation on social media and it should be done sooner rather than later,” Pasquale said.
Surden argued that early debates about regulating social media “largely failed to anticipate other major issues about social media that concern us today, and that many today consider to be even more important.”
These include social media's impact on young people's mental health and the spread of misinformation and disinformation, he said.
He noted that while the capacity to regulate social media currently exists, it is not clear what effective legal solutions there would be to the social problems that arise.
“We as a society are not as good at predicting problems in advance as we like to think we are,” Surden said.
“There are similar lessons for AI: We clearly have today's issues that need our attention, like privacy, bias, and accuracy,” Surden said, “but we should be humble about our ability to anticipate problems and preemptively regulate AI technologies, because we are often very bad at predicting the details and their impact on society.”
