EU told to regulate AI or risk a “Chernobyl-sized disaster”

AI News


EU AI laws are not enough to save humanity from extinction, world-renowned AI expert Stuart Russell told Euractiv.

AI regulation has fallen out of fashion worldwide as countries and regions compete for geopolitical dominance in artificial intelligence. The EU is no exception. The European Commission is considering pausing the AI Act amid increasing industry pressure, and campaigners fear the dilution of the general-purpose AI (GPAI) rules in a new code of practice expected in the coming days.

Russell, a professor of computer science at Berkeley, California, joined a last-ditch open letter urging the EU to “resist the pressure” of industry's final push to derail the code after a year of intense lobbying.

“[To industry] it doesn't matter what the document says. The companies simply don't want any regulation at all,” Russell told Euractiv.

Russell and his fellow signatories, who include Nobel laureates Geoffrey Hinton and Daron Acemoglu, warn that the alternative is a recipe for disaster. They want mandatory third-party audits baked into the code, so that makers of GPAI models such as ChatGPT cannot simply assert their systems are safe without independent checks.

Yet according to Russell, even the AI Act in its strongest form is too permissive to protect against future risks. “Even when the system is very dangerous… there is nothing in the rules that says you can't put it on the market,” he warned.

“If we have a system capable of taking control of our civilisation and our planet, a fine of a single-digit percentage of global revenue is ridiculous,” he added.

Extinction

Russell's view is controversial.

The author of AI's leading textbook belongs to a growing group of pioneers who believe the technology poses an existential threat. Others dismiss this “AI doomerism” as speculative science fiction.

“I find it strange that the press keeps characterising these [existential AI] risks as fringe… But look at the top five CEOs or the top five AI researchers in the world: with the exception of Yann LeCun, they all say: no, this is real.”

Even European Commission President Ursula von der Leyen cited the “risk of extinction” from AI in a 2023 speech. In May, she warned that AI could “approach human reasoning” by next year.

But no meaningful action has been taken to address such risks, Russell said. He fears that real restrictions will only come in response to a “Chernobyl-sized disaster”.

He argues that real regulation would demand safety proofs similar to those required for a nuclear power plant, but with even higher safety thresholds.

“But you're not going to get anything close to mathematical guarantees,” Russell said. “The companies don't have the slightest idea how their systems work.”

For now, the most he can hope for is mandatory external testing in the EU's upcoming code of practice.

“That's not enough… but it would be pretty useful,” he added.
