Last week, artificial intelligence pioneers and experts urged major AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
An open letter prepared by the Future of Life Institute warns that AI systems with “human-competitive intelligence” could pose a profound risk to humanity. Among the risks are that AI could outsmart humans, render us obsolete, and take control of civilization.
The letter highlights the need to develop a comprehensive set of protocols to manage AI development and deployment. It says:
These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger, unpredictable black-box models with emergent capabilities.
Regulation battles have typically pitted governments and big tech companies against each other. But the open letter has so far been signed by more than 5,000 signatories, including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and OpenAI scientist Jonas Kassa, which suggests that more parties are finally converging on one side.
Can we really implement a streamlined global framework for AI regulation? If so, what would this look like?
Read more: I used to work at Google, now I’m an AI researcher. Here’s why it’s wise to delay AI development
What regulations already exist?
In Australia, the government has established the National AI Centre to help develop the nation’s AI and digital ecosystem. Under this umbrella is the Responsible AI Network, which aims to promote responsible practice and provide leadership on laws and standards.
However, there is currently no specific regulation of AI and algorithmic decision-making. The government has taken a light-touch approach, broadly embracing the concept of responsible AI but stopping short of setting the parameters needed to ensure it is achieved.
Similarly, the United States has adopted a hands-off strategy. Lawmakers have shown no urgency about regulating AI and have relied on existing laws to govern its use. The U.S. Chamber of Commerce recently called for AI regulation to ensure it doesn’t stifle growth or become a national security risk, but no action has yet been taken.
The European Union is leading the way on AI regulation, racing to pass an Artificial Intelligence Act. This proposed law assigns AI applications to three risk categories:
- Applications and systems that create “unacceptable risk”, such as government-run social scoring used in China, are prohibited.
- Applications deemed “high risk”, such as CV scanning tools that rank job applicants, are subject to specific legal requirements.
- All other applications are largely unregulated.
Some groups have argued that the EU’s approach will stifle innovation, but it also offers predictability while keeping pace with the development of AI, which is why Australia should monitor it closely.
China’s approach to AI is focused on targeting specific algorithmic applications and creating rules that address deployment in specific contexts, such as algorithms that generate harmful information. While this approach offers specificity, it runs the risk of having rules that lag behind rapidly evolving technologies.
Read more: AI chatbots with Chinese characteristics: Why Baidu’s ChatGPT rivals will never catch up
Pros and cons
There are several arguments both for and against regulating AI.
On the one hand, AI is celebrated for its ability to generate content of all kinds, handle mundane tasks, and detect cancers. On the other hand, it can deceive, perpetuate bias, and plagiarize, and some experts worry about the future of humanity as a whole. Even OpenAI’s CTO, Mira Murati, has suggested there should be a move toward regulating AI.
Some scholars argue that over-regulation could completely thwart AI’s potential and prevent “creative destruction.” This theory suggests that longstanding norms and practices must be pulled apart for innovation to thrive.
Similarly, business groups have for years pushed for regulation that is flexible, targeted, and limited to specific applications, so as not to stifle competition. Industry groups have also called for ethical “guidance” rather than regulation, arguing that AI development is too fast-moving and open-ended to regulate properly.
But citizens seem to favor greater oversight. About two-thirds of people in Australia and the UK believe the AI industry should be regulated and held accountable, according to reports by Bristows and KPMG.
What’s next?
A six-month pause in the development of advanced AI systems could provide a temporary respite from the seemingly unstoppable AI arms race. However, so far there has been no effective global effort to meaningfully regulate AI. Efforts around the world are fragmented, delayed, and generally lax.
Enforcing a global moratorium would be difficult, but not impossible. The open letter also raises questions about the role of governments, which have largely stayed silent about the potential harms of highly capable AI tools.
If anything is to change, governments and national and supranational regulatory bodies will need to take the lead in ensuring accountability and safety. As the letter argues, decisions about AI at a societal level should not be left in the hands of “unelected tech leaders.”
Governments should therefore work with industry to jointly develop a global framework setting out comprehensive rules for AI development. This is the best way to guard against harmful impacts and avoid a race to the bottom. It would also avoid the undesirable situation in which governments and tech giants struggle for dominance over the future of AI.
Read more: AI arms race highlights urgent need for responsible innovation