Should governments be allowed to regulate AI?

Governments regulate many things, and the list is almost endless: radio frequency allocation, firearms, property use, pharmaceuticals, industrial emissions, and nuclear power generation, among others. In the United States, government leaders have even talked about regulating food package sizes. So, will AI be next on the regulatory list? Expert opinions vary.

Rebecca Engrav, co-chair of the artificial intelligence, machine learning, and robotics practice at the business law firm Perkins Coie, weighs the risks of using AI against the twin goals of expanding AI's availability and reach. She believes it is fair to consider how government regulation can most effectively serve the public and better society. "That said, the first question should always be whether there are existing laws or regulations that can address specific concerns about AI," she said in an email interview.

Measures are already being taken, however. On March 13, the European Parliament approved the Artificial Intelligence Act. The law takes a risk-based approach, requiring companies to ensure their products comply with the law before releasing them to the public. The next day, under separate legislation, the European Commission required Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X to explain how they limit the risks of generative AI.


Hands-off or on?

Regardless of whether AI needs regulation, laws don't work well if they're created to cover only one technology, Engrav says. “This type of exceptionalism results in laws becoming less formal and less coordinated over time, especially given how infrequently they are updated in the United States.”

Anand S. Rao, a professor of AI at Carnegie Mellon University's Heinz College of Information Systems and Public Policy, believes government regulation will ultimately be needed to control the misuse of AI. "Imagine AI without any government oversight," he explained in an email. "It's like a car speeding down the road without brakes or steering. It can be chaotic and dangerous. Government intervention can guide the trajectory of AI toward social good and ensure that its development is aligned with the public interest and ethical standards."

Unregulated AI could amplify existing biases, cause physical and psychological harm, and undermine public trust in technology, Rao warns. "The challenge for governments is how to walk the fine line between fostering innovation and preventing harm." Striking this balance requires regulatory measures that neither impede technological progress nor impose economic penalties, while making sure existing inequalities are not made worse, he explains.


New government regulation of AI makes sense, Engrav said, for types of harm that are relatively persistent and substantial, that cannot be remedied by existing regulations, and, perhaps most important, around which broad consensus can be reached that new regulation would remedy them.

But Arthur "Barney" Maccabe, executive director of the University of Arizona's Institute for Computation and Data-Enabled Insight, is skeptical that the government will be able to create fair and meaningful AI regulations. "The rapid evolution of AI will always outpace the creation of comprehensive regulation, making it nearly impossible for government intervention to keep up with technological advances," he explained in an email interview.

Potential alternatives

Maccabe advocates self-regulation. "Governments should support industry-led efforts to regulate AI where the industry demonstrates effective self-regulatory practices," he said. "For example, the financial sector has successfully introduced self-regulatory processes through organizations like the National Futures Association to ensure the adequacy of product development." A similar model, he said, could be applied to AI regulation.


There is ample incentive for self-regulation, Rao said, and market dynamics can act as effective guidance. "In such a scenario, it makes sense for governments to refrain from imposing regulations. However, when the risks associated with AI are greater, when a small number of actors compete for dominance, or when society's broader values could be compromised by commercial interests, a hands-off attitude is not advisable."

Another possibility, Maccabe said, is to seek guidance from relevant professional societies, such as the Association for Computing Machinery (ACM) or the Institute of Electrical and Electronics Engineers (IEEE). "These professional societies have a strong foundation in ethics, are international and nonpartisan, and have the expertise needed to delve into the technical details and potential impacts of regulations."

Unnecessary evil?

For now, Engrav says, governments should leave AI alone. "We do not yet have enough information to predict with confidence that proposed regulations will actually reduce risk without unduly harming or inhibiting the competitive environment or entrepreneurship," she explains. "Governments should proceed with new legislation only after strong stakeholder engagement with all types of parties affected by the regulation, as well as those speaking for broader societal interests."

In general, Maccabe says, government regulation should be viewed as a last resort, reserved for when other forms of regulation fail to address immediate needs. When government regulation is required, he points out, it is essential that the process be swift, transparent, and reliable. "The goal is to ensure that AI is developed and deployed in a way that benefits society as a whole while minimizing potential harm."




