Regulating AI puts businesses and governments at odds



The author is the International Policy Director of the Cyber Policy Center at Stanford University and a special advisor to Margrethe Vestager.

The rapid release of generative AI tools has forced a moment of reckoning. Just this week, Geoffrey Hinton, a pioneering AI developer, announced he was leaving his job at Google, saying he regretted his work and wanted to speak freely about the dangers and risks of the technology he helped create.

Elon Musk, in typical contrarian fashion, warned that AI could destroy civilization and then founded an AI company of his own. He was an early investor in OpenAI, which develops generative AI tools such as GPT-4.

Musk had joined AI experts and some industry leaders in calling for a six-month moratorium on generative AI development, presumably so that policymakers could put rules in place within that window. Those signatories have clearly never experienced a democratic legislative process. Adopting and implementing AI legislation, let alone establishing new regulatory bodies, will take years.

Others suggest questioning AI executives under oath to build a record of the safety issues they encounter. But past hearings with Meta’s Mark Zuckerberg and Google’s Sundar Pichai left no mark on the business models of the social media and search giants, and no laws limiting their power followed.

Suddenly, everyone wants to regulate AI. Open letters are being drafted and legislative proposals discussed. Unfortunately, the mismatch between the nature of AI and the solutions on offer reveals a deep disconnect between those who develop and sell AI and those who make policy and vote on new laws.

Politicians around the world understand that something must be done urgently, and they are now racing to set new rules. Republicans and Democrats, Chinese and European governments alike are trying to curb the threat of AI in a rare moment of political unity, albeit each for their own political reasons.

The EU has gone furthest in outlining what guardrails should look like. The EU AI Act primarily weighs the risks AI poses to employment, education or access to human rights once a system is deployed. However, EU officials acknowledge that this focus on applications leaves out generative AI as a general-purpose technology. The next wave of breakthroughs will make today’s synthetic media look primitive. We don’t know what comes next, but we do know that new capabilities will keep emerging, so regulations adopted today must also cover future iterations.

This is a problem that policy innovation itself must solve. (It would be nice if AI developers also updated their understanding of the rule of law, but I have learned to lower my expectations.) What excites engineers worries regulators. An AI system’s risk lies not only in its particular application but also in who controls the system in the first place. At the moment, corporations run the show, and that is dangerous for democracy.

Successful AI regulation must address three areas. First, we need to rebalance the power dynamic between AI developers and the rest of society. The asymmetry is already stark: only the largest tech companies can develop cutting-edge AI, thanks to their access to datasets and the computing power to train models on them. Even wealthy universities like Stanford, which train top AI engineers, lack the data and compute of the neighboring Silicon Valley companies. As a result, the inner workings of AI systems that have enormous impact on society remain locked inside corporate walls.

The second issue is access to information. To allow lawmakers to see the inner workings of AI, we need mechanisms that protect the public interest. There is currently no public insight into the algorithms behind socially consequential applications, which hinders fact-based debate, targeted public policy and the accountability mechanisms we need.

And third, we cannot ignore the ever-changing nature of AI. Regulation must be both flexible and enforceable. That may mean requiring developers to keep logs, so that the impact of adjustments to a system’s settings can be traced.

There is political will to regulate AI, but the road ahead is difficult. AI experts and legislators alike would benefit from a deeper understanding of each other’s worlds: computer scientists need to grasp AI’s impact on democracy, and regulators need to dig deeper into how AI works. As long as the gap between them persists, regulation will not match the power of AI, and that mismatch creates risks of its own.


