In late March, the Future of Life Institute published an open letter (and an associated FAQ) calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter states that the pause “should be public and verifiable, and include all key actors,” and that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” It also asserts that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” The letter was initially signed by more than 1,000 people, including many prominent technology leaders; after publication, thousands more added their signatures.
Individual companies and universities have the right to decide whether, and at what pace, they will work on artificial intelligence (AI). But a US government moratorium on training powerful AI systems would raise a number of concerns, including the following:
Delaying the benefits of AI
It’s already clear that AI will benefit drug development, medical diagnostics, climate modeling and forecasting, education, and many other areas. Moreover, as is often the case with new technologies, large-scale AI systems will yield benefits that cannot be fully predicted in advance.
A nationwide, government-mandated halt to work on key categories of AI would inevitably delay access to the technology’s benefits. For some applications, such as using large language models to improve education and expand access to legal services, those delays would have problematic consequences.
Legally questionable
No US federal or state agency has clear legal authority to impose a moratorium on training large-scale AI systems. The Federal Trade Commission (FTC), for example, has a mission to “protect the public from deceptive or unfair business practices and from unfair methods of competition.” An FTC-mandated moratorium would, ironically, hinder competition: it would require companies to refrain from competing to develop better AI systems and instead to act in concert in stopping (and later resuming) the work of training large-scale AI models. Congress has broad legislative power under the Commerce Clause, but that power, too, has limits.
There are also First Amendment implications, as the First Amendment protects the receipt of information, including digital information obtained over the internet. Of course, as several recent lawsuits against companies that make AI image generators have highlighted, training AI models on third-party data raises complex, unresolved copyright questions. But to the extent that companies can build large training datasets in ways that avoid violating copyright law and contracts, there is a plausible (though untested) argument that the First Amendment protects the right to use that data to train large-scale AI models.
In short, any moratorium, whether imposed by a government agency or by Congress, would immediately be challenged in court.
Difficult to enforce effectively
A moratorium would also be difficult to enforce effectively. The US government is clearly not going to conduct Prohibition-style raids on companies suspected of carrying out forbidden AI training. More generally, the government lacks the human and technical resources to affirmatively verify compliance with a national moratorium. Instead, a moratorium would likely be enforced through self-reporting, with companies and universities required to certify that they are not engaging in prohibited AI work. And there is no easy way even to compile the list of companies and universities that would be subject to such a certification requirement.
Another enforcement problem is that, unless a whistleblower comes forward, conduct violating the moratorium would be nearly impossible to detect. In this respect, AI differs sharply from a field like nuclear weapons development, where it is possible (though not always easy) to track compliance with a moratorium, because the materials and technologies involved, such as enriched uranium and centrifuges, are difficult to obtain and have very limited uses. The key inputs to AI systems are data and computing power, both of which are easily accessible and have an essentially endless list of uses that would not violate a moratorium.
The line-drawing problem
Yet another concern lies in defining exactly which AI-related work would be prohibited. What is the size threshold above which an AI system falls under the moratorium? Which metric, or set of metrics, adequately characterizes the size of an AI system? Who performs the measurements? Can regulatory language imposing a moratorium on AI systems above a certain size be written without creating loopholes that are easily circumvented? Would the moratorium apply only to the actual training of large-scale AI systems, or also to the development of related technologies? And what about advances that make it possible to build more capable AI with smaller systems and less training than before?
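To see why the choice of metric matters, consider total training compute, one commonly proposed measure of an AI system’s size. (The approximation and figures below are illustrative assumptions, not anything specified in the open letter.) A widely used rule of thumb estimates training compute as

C ≈ 6 · N · D floating-point operations (FLOPs),

where N is the number of model parameters and D is the number of training tokens. Under this estimate, a hypothetical model with N = 10^11 parameters trained on D = 10^12 tokens would require roughly 6 × 10^23 FLOPs. But whether such a model counts as “more powerful than GPT-4” is unanswerable on its face: GPT-4’s training compute has not been publicly disclosed, and compute is at best a proxy for capability, which is precisely the line-drawing problem.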
The “what next?” question
A six-month moratorium would also quickly run into a lack of consensus about what should happen next. As the expiration date approached, some would argue that the moratorium should be extended for another six months or more. Others would argue that it should be lifted entirely. Still others would advocate a new and different framework that revises the list of prohibited activities. These uncertainties would make it very difficult for companies to make decisions about hiring, R&D investments, and AI-related product plans.
Geopolitical implications
One result, however, is both notable and clear: a US moratorium on training the largest AI models would have no effect internationally. Governments and companies in other countries would continue to invest in building large-scale AI systems, and the advances, know-how, and job creation flowing from that work would put the United States at a disadvantage in AI technology.
In short, while AI holds extraordinary promise, it also creates a new set of risks. No matter what policy the United States adopts, the technology of large-scale AI systems will continue to advance globally. The US government would do far better to keep the country at the forefront of AI, advancing the cutting edge of the technology and using that knowledge to identify and mitigate risks, than to attempt a legally dubious, unenforceable, and easily circumvented nationwide shutdown of work on training large-scale AI systems.