Tech companies agree to AI 'kill switch' to prevent Terminator-like risks

You can't put AI back into Pandora's box. But the world's largest AI companies have voluntarily struck a new deal with governments to address the biggest concerns surrounding the technology, and to allay fears that unchecked AI development could lead to sci-fi scenarios in which AI turns against its creators. Without strict legal provisions to give governments' AI efforts real teeth, however, those discussions will only go so far.

This morning, 16 influential AI companies, including Anthropic, Microsoft, and OpenAI, along with 10 countries and the European Union, met at a summit in Seoul, South Korea, to develop guidelines for responsible AI development. One of the summit's big outcomes was that the AI companies in attendance agreed to a so-called "kill switch": a policy under which they would halt development of their most cutting-edge AI models if those models were deemed to have crossed certain risk thresholds. It is unclear how effective the policy will be in practice, however, given that the agreement carries no real legal weight and defines no specific risk thresholds. AI companies that were not in attendance, as well as competitors of the signatories, are not subject to the pledge.

A policy document signed by AI companies including Amazon, Google, and Samsung states: "In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds." The summit follows last October's Bletchley Park AI Safety Summit, which convened a similar group of AI developers and was criticized as "worthy but toothless" for its lack of actionable near-term commitments to protect humanity from the proliferation of AI.

Following that summit, a group of participants wrote an open letter criticizing the forum's lack of formal rulemaking and the outsized role of AI companies in shaping regulation of their own industry. "Experience has shown that the best way to address these harms is through enforceable regulatory mandates, not self-regulation or voluntary measures," the letter said.

Writers and researchers have warned about the risks of powerful artificial intelligence for decades, first in science fiction and now in the real world. One of the best-known references is the "Terminator scenario": the theory that, left unchecked, AI could grow more powerful than its human creators and turn against them. The theory takes its name from the 1984 Arnold Schwarzenegger film, in which a cyborg travels back in time to kill a woman whose unborn son will one day lead the fight against an AI system bent on triggering a nuclear holocaust.

"AI offers tremendous opportunities to transform our economy and solve our biggest challenges, but I have always been clear that this full potential can only be unlocked if we grip the risks posed by this rapidly evolving, complex technology," said UK Technology Secretary Michelle Donelan.

AI companies themselves recognize that their cutting-edge products are venturing into technologically and morally uncharted territory. OpenAI CEO Sam Altman has said that artificial general intelligence (AGI), which he defines as AI that surpasses human intelligence, is "on the horizon" and will come with risks.

"AGI would also come with serious risk of misuse, drastic accidents, and societal disruption," OpenAI wrote in a blog post. "Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right."

But so far, efforts to cobble together a global regulatory framework for AI have been scattered and largely lack legislative authority. A UN policy framework calling on countries to guard against AI risks to human rights, monitor the use of personal data, and mitigate AI risks was approved unanimously last month, but it is not binding. And the Bletchley Declaration, the centerpiece of last October's global AI summit in the UK, contained no concrete regulatory commitments.

Meanwhile, AI companies are starting to establish their own organizations to promote AI policy. Yesterday, Amazon and Meta joined the Frontier Model Forum, an industry nonprofit "dedicated to advancing the safety of frontier AI models," according to its website. They join founding members Anthropic, Google, Microsoft, and OpenAI. The nonprofit has yet to put forward any firm policy proposals.

Governments have had somewhat more success. President Biden's executive order on AI safety, issued last October, included binding legal requirements that go beyond the vague promises outlined in similar policies, and was touted by senior administration officials as a first-of-its-kind government action. For example, Biden invoked the Defense Production Act to require AI companies to share safety test results with the government. The EU and China have also enacted formal policies addressing issues such as copyright law and the collection of users' personal data.

States are also taking action, with Colorado Governor Jared Polis yesterday announcing a new bill that would ban algorithmic discrimination in AI and require developers to share internal data with state regulators to ensure they are in compliance.

This won't be the last chance for global AI regulation. France is set to host another summit early next year, following the meetings in Seoul and Bletchley Park. By then, participants say, they will have produced a formal definition of the risk thresholds that would trigger regulatory action, a major step forward for what has so far been a relatively cautious process.



