OpenAI Chief Executive Sam Altman said in congressional testimony on May 16 that it was time for regulators to start putting limits on powerful AI systems. “As this technology advances, we understand that people are anxious about how it will change our lives. So are we,” Altman told lawmakers. “If this technology goes wrong, it could go very wrong,” he added, warning it could cause “significant harm to the world.” He agreed with lawmakers that government oversight will be critical to mitigating the risks.
A topic that received little attention from lawmakers a year ago, the pros and cons of regulating or banning some uses of artificial intelligence are now being vigorously debated by governments around the world. But the question for business leaders right now is not when and how AI will be regulated, but by whom. Whether Congress, the European Commission, China, or the U.S. states and courts take the lead will determine both the speed and the trajectory of AI’s transformation of the global economy, potentially protecting some industries while limiting every company’s ability to use the technology to interact directly with consumers.
The use of generative AI has exploded since the November 2022 release of OpenAI’s ChatGPT, a chatbot built on self-improving large language models (LLMs). According to data compiled by Statista, ChatGPT reached 1 million users in five days, far faster than the launches of other fast-growing internet products such as Facebook, Spotify, and Netflix. Midjourney and DALL-E, LLM-based tools that create custom illustrations from user prompts, have likewise exploded in popularity, generating millions of images every day. Generative AI clearly meets the criteria for what one of us previously co-defined as a “Big Bang Disruptor”: a new technology that, from the moment of its release, offers users a better and cheaper experience than the competition.
Such high-profile adoption is understandably exciting, but also worrisome for incumbents. The possibilities for LLMs seem limitless, potentially revolutionizing everything from search to content generation, customer service to education. And unlike more narrowly targeted Big Bang Disruptions, ChatGPT and other LLMs are exceptionally powerful disruptors, rewriting long-standing rules not in just one industry but in many industries at the same time.
Given the potential scale of this disruption, along with concerns about privacy, bias, and even national security, it is no surprise that lawmakers have taken note. Recall Goethe’s poem “The Sorcerer’s Apprentice,” animated in the classic Disney film Fantasia: the apprentice unleashes a power that quickly spirals out of control, threatening to destroy everything in sight until the sorcerer returns to the workshop and restores order. Many of those concerned about AI’s possible unintended consequences, including developers like Altman, are looking to lawmakers to play the sorcerer’s role.
Everyone is getting in on the act
In the United States, multiple players are vying to lead the regulation of AI.
First there is Congress, where Senate Majority Leader Chuck Schumer is calling for preemptive legislation to establish regulatory “guardrails” for AI products and services. The guardrails would focus on user transparency, government reporting, and “aligning these systems with American values to ensure AI developers live up to their promise to create a better world.” At this stage, however, the proposal is notably vague.
Second, within the Biden administration there is some competition among federal agencies to implement the White House’s blueprint for an AI Bill of Rights, introduced last October. The blueprint is similarly general: it calls for “safe and effective” systems that neither discriminate nor violate privacy expectations, requires that users be told when they are interacting with an automated system, and asks developers to provide a human “fallback” for users who request one. But it does not define those key terms, at least not yet.
Within the Department of Commerce, the National Telecommunications and Information Administration (NTIA) has launched an inquiry into the usefulness of audits and certifications for AI systems. The agency is seeking comment on dozens of questions about AI system accountability, including when, how, and by whom new applications should be evaluated, certified, and audited, and what criteria should be included in these reviews. Here, the specificity of the inquiry seems to point in the right direction.
Meanwhile, Federal Trade Commission Chair Lina Khan has taken a different approach, arguing that her agency already has jurisdiction over LLMs and steering it toward the new technology with its twin mandates of competition and consumer protection. Khan speculates that AI could exacerbate existing problems in the tech industry, including “collusion, monopolies, mergers, price discrimination and unfair methods of competition.” The FTC chair also believes generative AI could “accelerate fraud” given its ability to create deceptive yet convincing content. And by basing responses to user queries on biased data sets, whether intentionally or not, LLMs may violate existing privacy and anti-discrimination laws, she argues.
In addition, the states are going even further, with AI-related legislation already introduced in at least 17 of them. Some of these proposed laws would encourage local development of AI products, while others would limit AI’s use in applications such as health care and hiring. Many states have created, or are considering creating, their own commissions to recommend future legislation.
So far, these proposals offer little concrete detail, and the hypothetical harms they ascribe to AI fall into existing legal categories such as misinformation and copyright and trademark abuse. In any case, regulators are likely to have little influence over the technology’s development in the short term. Many of the proposed regulations would require Congress to grant agencies additional legal authority, which seems unlikely in the current political climate. Even then, applying new rules would be a matter for the courts, requiring years of grinding case-by-case work. And governments have historically struggled to muster the technical expertise needed even to define the novel harms that LLMs and other AI applications might cause.
The Department of Commerce deserves credit for asking the right questions in its inquiry. It is unclear, however, whether Secretary Gina Raimondo has the legal authority to create a durable certification process, or the political clout to persuade the tech industry to support the NTIA’s efforts. Moreover, as the department acknowledges, the inquiry is just one part of a larger White House effort to create a trustworthy environment for AI services, a goal that will require an unprecedented level of coordination and cooperation across multiple government departments.
These debates are also unfolding against the backdrop of major changes in U.S. law that are likely to determine who ultimately wins the role of AI’s primary regulator. Recent Supreme Court decisions have dramatically altered the legal landscape of business regulation, shifting authority from federal regulators to the courts and the states, which will further increase fragmentation, uncertainty, and delay in enforcement actions. The Court has given companies the go-ahead to challenge agency rulemaking by, for example, demanding more specific direction from Congress, effectively leaving to federal judges the final say on whether adopted rules take effect. Meanwhile, of course, the technology continues to evolve at its own accelerating pace.
Taken together, these constraints suggest that major AI regulation is likely to come first from outside the United States.
When it comes to competition law, especially as applied to technology companies, the momentum has already shifted over the last few decades from the U.S. to Europe. While the EU continues to pass substantial new internet-related legislation, Congress remains in limbo, and the FTC and other federal agencies have few of the tools and resources of their European counterparts. The European Parliament recently approved a 100-page draft law that would preemptively ban applications deemed to pose an “unacceptable” level of risk, require prior approval and licensing before use within the EU, and impose hefty fines on developers for various violations.
Chinese regulators are also moving rapidly, both to encourage homegrown AI products and services and to define how they can and cannot operate. Not only could this limit how non-Chinese companies interact with China’s billion-plus potential users, but, by virtue of being first, China’s rules could become the de facto legal regime for future applications.
What companies should do now
It is not at all clear what combination of government actions, including legislation, regulation, and judicial decisions, could actually strike the balance between maximizing AI’s value and minimizing its potential harm to the broader economy and society. As with all innovative technologies, governments’ capacity to effectively regulate LLMs will almost certainly fall short. This is not a criticism of legislators or regulators but a side effect of a basic fact: technology evolves exponentially, while law advances incrementally.
In the meantime, business leaders and academics should take a cue from the Commerce Department’s ongoing effort and begin developing nongovernmental regulatory bodies that can identify trusted AI products and services, create market incentives for buying ethical and trustworthy AI, and audit and certify which applications can be trusted and which cannot.
There is, of course, a long history of successful (and unsuccessful) self-regulatory organizations, going back to the merchant “courts” that enforced the norms of medieval markets. Today, numerous bodies, including the International Organization for Standardization, develop standards and certify corporate compliance with a wide variety of best practices and assessments. In the information age, similar efforts have been made on everything from corporate standards for dealing with authoritarian regimes to the development of the software and protocols that make up the internet itself.
Some government regulation is inevitable. Still, the surest way to avoid provoking the sorcerer is to avoid causing too much chaos in the first place.
