Artificial Intelligence and Administrative Law

Applications of AI


The Federalist Society recently hosted a webinar titled “New Voices in Administrative Law: A New Debate on AI Regulation.” Panelists Eli Nachmany, Laura Stanley, and Sean-Henry Van Dyke discussed the important questions that will arise as Congress and the Executive Branch consider whether and how to regulate artificial intelligence. Professor Aram Gavoor of GW Law moderated the discussion. Below is a summary of each panelist’s remarks. See the full panel here.

Laura Stanley

Some argue that the coming AI revolution will improve our health, prevent accidents, and raise our standard of living. Others worry that a hands-off approach to regulation could invite problems ranging from invasions of privacy to the destabilization of democracy. Bipartisan support is growing for the creation of a new agency to regulate AI. There are existing legal remedies and authorities that can be used to manage AI risks, as well as new authorities that could be granted to a new regulator. But the debate over creating new institutions persists, and lawmakers should pay close attention to the degree of political insulation if they do choose to create new institutions.

Institutions with less independence may have legal and practical advantages, especially given the unique mandate of regulating algorithms. In recent years, for example, the Supreme Court has scrutinized offices that are insulated from political control in cases like Seila Law v. CFPB and Arthrex. An institution whose decision-makers are appointed by, and easily removable by, the president may avoid these separation-of-powers problems.

In practice, such an agency would almost certainly fall under the purview of the Office of Information and Regulatory Affairs (OIRA). Executive agencies must submit significant regulations to OIRA for review before they are issued, but independent agencies are exempt. AI is a general-purpose technology that touches every field, and computer science experts are not necessarily experts in AI’s far-reaching impacts. OIRA, however, has extensive expertise in analyzing how regulatory regimes interact and in creating the associated checks and balances. OIRA can also draw on the expertise of agencies across the government, and its review process requires agencies to justify policy decisions in the language of cost-benefit analysis, which helps combat regulatory capture. While most of the public debate has focused on how a government agency could help mitigate the risks of AI, an AI regulator could also be involved in reforms that maximize AI’s potential. For example, an AI agency could work with OIRA to identify existing regulations that impede beneficial AI adoption.

Sean-Henry Van Dyke

A growing bipartisan faction in Congress wants to address the risks posed by artificial intelligence. But the state-of-the-art AI systems they seek to regulate are highly complex and rapidly evolving. Because of these challenges, many lawmakers appear willing to outsource AI regulatory work to agencies that (hopefully) have the technical expertise to understand and mitigate the risks. But unless Congress itself addresses certain underlying issues, there is a risk that litigation and administrative delay will undermine the entire regulatory scheme. Two such questions deserve special attention.

The first question concerns agency jurisdiction. Assuming Congress created an agency to regulate AI (or gave an existing agency new powers), which AI systems would be subject to regulation? It is hard to draw the line between the powerful AI systems (such as ChatGPT) that Congress seems concerned about and the algorithms used in more mundane applications. Congress may try to sidestep the problem entirely by letting the agency decide which systems to regulate. But this comes with risks. One is the risk of mission creep: without clear jurisdictional boundaries, an agency founded with ChatGPT and job automation in mind may go on to regulate search engine results, social media feeds, smart home devices, and more. Moreover, blurry jurisdictional boundaries risk requiring years, or even decades, of litigation to settle the limits of the agency’s authority. (Compare, for example, the decades-long “waters of the United States” litigation.) For these and other reasons, Congress should spend substantial time trying to create sensible and workable jurisdictional boundaries on the front end, rather than leaving these issues for agencies and courts to resolve on the back end.

The second question concerns unforeseen risks. It would be relatively easy for Congress to empower an agency to deal with certain known risks, such as election interference or algorithmic discrimination. But what about serious, even existential, risks posed by AI that cannot be foreseen in advance, or at least not with any specificity? Congress likely faces a catch-22 if it wants to delegate power to an agency. If Congress focuses specifically on the risks that are currently understood, significant risks that arise in the future may be off-limits under the major questions doctrine. But if Congress is not so specific, the whole framework may be vulnerable under the nondelegation doctrine. Perhaps one solution is to build the infrastructure for rapid regulatory adaptation within Congress itself: for example, establishing a standing committee and hiring specialized staff to track and respond to AI developments. At the very least, the question of unforeseen risks deserves serious attention as Congress considers how to regulate AI.

Eli Nachmany

Congress may regulate artificial intelligence. Or it may not. But regardless of what Congress does, the executive branch is likely to respond to advances in AI technology with administrative action. For example, the Biden White House has already released a Blueprint for an AI Bill of Rights. Of course, many of the existing legal frameworks that govern technology did not contemplate AI regulation when they were enacted. Government AI regulation therefore faces the problem of applying “old laws” to “new problems,” as Jody Freeman and David Spence discussed in their 2014 article on environmental regulation.

In the major questions era, courts are skeptical of significant regulatory action based on dubious legal authority. Indeed, in West Virginia v. EPA, the Supreme Court cautioned against agencies finding unprecedented powers in older statutes, especially when such powers would represent a transformative expansion of the agency’s regulatory authority. This situation creates problems for regulators. AI is a novel, complex, and perhaps terrifying technology whose rapid growth threatens to upend our economic structure and possibly our very way of life. It also offers immense opportunities for eliminating economic inefficiencies, advancing knowledge, and improving the human condition. Developments of this kind often invite government action. But agencies should be cautious about finding authority to regulate AI in statutes that are not about AI regulation, especially in the absence of congressional action on the issue.

Admittedly, government regulation of AI is not the only means by which Americans can deal with this new technology. In my remarks, I also encouraged the cultivation of private civic virtues with respect to the use of AI in three separate areas. First, Americans should be mindful of data privacy when using AI language models. AI companies can glean a great deal of information about people from the questions they enter into these models as they tinker with the software. Second, Americans should develop a culture of strict scrutiny of algorithmic bias (in all its forms, including political and ideological bias), questioning the extent to which AI language model responses seek to influence our behavior in one direction or another, from purchasing decisions to voting. Third, Americans must continue to take special care of the workers displaced by this new technology.

See the whole thing here.

Editor’s Note: The Federalist Society does not take any positions on specific legal and public policy issues. All opinions expressed are those of the author. To participate in the discussion, please email info@fedsoc.org.




