Keep CEOs away from AI regulation



The author is Director of International Policy at Stanford University's Cyber Policy Center and Special Advisor to Margrethe Vestager.

Tech companies recognize that the race for AI supremacy will be decided not only in the market but also in Washington and Brussels. The rules governing the development and deployment of AI products, which will have a lasting impact on these companies, remain up in the air for now. So executives are preemptively setting the tone, claiming that they are best positioned to regulate the very technology they create. AI may be new, but the arguments are recycled: they are the same ones Mark Zuckerberg used for social media and Sam Bankman-Fried used for cryptocurrencies. Such statements should not sway democratic lawmakers again.

Imagine the chief executive of JPMorgan explaining to Congress that, because financial instruments are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, detect fraud, and set liquidity and lending ratios. He would be laughed out of the room. Angry voters would point out how well self-regulation worked during the global financial crisis. From big tobacco to big oil, we have learned the hard way that companies cannot set disinterested regulations. They are neither independent nor capable of creating countervailing powers to check their own interests.

Somehow that basic truth is forgotten when it comes to AI. Lawmakers defer to the voices of companies and seek their guidance on regulation. Senators even asked OpenAI chief executive Sam Altman to name potential industry leaders to oversee a prospective national AI regulator.

Within the industry, calls for AI regulation take on an apocalyptic tone. Scientists warn that their creations are too powerful and could go rogue. A recent letter signed by Altman and others warned that AI poses a threat to human survival akin to nuclear war. You might think these fears would spur executives into action, yet virtually none of the signatories has changed their behavior. Perhaps their real goal is to frame how we think about guardrails for AI. Our ability to answer questions about what kind of regulation is needed depends heavily on how we understand the technology itself. The statements emphasize the existential risks of AI. But critics argue that prioritizing the prevention of such hypothetical future harms overshadows the much-needed work against discrimination and bias that should be done today.

Warnings about the catastrophic risks of AI are disorienting when they come from the very people who could stop pushing AI products out into society. In the open letter, the signatories present themselves as powerless bystanders making a desperate appeal. But those ringing the alarm already have the power to slow or pause potentially dangerous advances in artificial intelligence.

Former Google CEO Eric Schmidt argues that only companies, not governments, can develop guardrails. But legislators and officials are not experts in agriculture, crime-fighting, or drug prescribing either, yet they regulate all of those activities. The complexity of AI should not discourage them; rather, it should spur them to take responsibility. Schmidt also inadvertently points to the first task: breaking the industry's monopoly on access to critical information. Independent research, realistic risk assessments, and guidelines for enforcing existing regulations would put the debate over the need for new measures on a factual footing.

Actions speak louder than words. Days after Sam Altman welcomed AI regulation in congressional testimony, he threatened to pull OpenAI out of Europe because of it. When he realized that EU regulators did not take kindly to his threats, he switched back to a charm offensive and promised to open an office in Europe.

Legislators should remember that businesspeople are primarily concerned with profit, not social impact. Now is the time to move past the pleasantries and define specific goals and methods for AI regulation. Policymakers must not let tech company CEOs shape and control the narrative, let alone the process.

A decade of technological turmoil has highlighted the importance of independent oversight. That principle matters even more when power over a technology like AI is concentrated in the hands of a few companies. We should listen to the influential individuals who run them, but never take their word at face value. Their grand claims and ambitions should instead push regulators and legislators to act on their own expertise: expertise in the democratic process.


