Don’t Blame Us For AI Threats To Humanity, We’re Just Technicians



So here’s a radical thought. What if we simply did not promote technology that its own leading inventors claim could soon have the power to kill humans?

The idea was prompted by a warning from the man who set up the Prime Minister’s artificial intelligence taskforce. Matt Clifford said that, given where the models are expected to be in two years’ time, “there could be very dangerous threats to humans that could kill many humans, not all humans”. On reflection, perhaps I’m overreacting: his full remarks were more nuanced, and anyway it’s not all humans. Just many of them.

But equally apocalyptic warnings come from the technology’s own creators, writing under the auspices of the Center for AI Safety. In an admirably succinct statement, AI industry leaders warned: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The heads of Google DeepMind, OpenAI and a host of other companies took time away from inventing technology that could wipe out human life to warn the rest of the world that something really should be done to stop this happening.

And these guys are supposed to be geniuses? The potting sheds of England are full of eccentrics who have invented a new machine that might be brilliant but could burn your house down. Most of them manage to work out for themselves that the device is, on balance, not such a great idea after all.

This, presumably, is where those backyard inventors went wrong. Instead of weighing up the risks themselves, what they really needed to do was rake in billions of pounds of venture capital money and then write a letter to the local council warning that someone really ought to regulate them.

To be serious for a moment, I recognize that great things are expected of artificial intelligence, much of it nothing to do with the extinction of humanity. Many argue that AI could play a pivotal role in securing a carbon-free future, though perhaps that is just a euphemism for wiping out humanity.

Equally, we cannot uninvent the progress that has already been made. But AI chatbots are already making up information, “hallucinating”, as their developers like to put it, and their inventors aren’t quite sure why. So there seems to be a case for slowing down and ironing out these little wrinkles before moving on to the extinction-level technology.

A generous view of the tech leaders calling for restraint is that they are the responsible ones, worried about other, less scrupulous actors. They would love to do more themselves but, you know, the Googlers can’t afford to lose ground to the Microsofters.

These warnings, then, are an attempt to get politicians and regulators to take action, and world leaders have such a glorious track record of cooperating and responding wisely to extinction-level threats. Still, it seems harsh to carp. I mean, come on: they mentioned it to the US Congress. You can’t ask for more than that. And the British government, which is now on the case, would be more reassuring if it hadn’t spent 18 months struggling to process asylum seekers.

With any luck, the warning will actually shock governments into taking useful action. Perhaps it will lead to global standards, international agreements and a moratorium on the killer developments.

In any case, the consciences of the AI masters are now clear. They did everything they could. And if, some time around 2025, the machines do gain the power to obliterate us (well, many of us, unfortunately), I’d like to think that in the final seconds the superior AI will conduct one last study of the human brain: the organ that kept deploying technology which could destroy it without ever figuring out how to stop it.

“Why did you take the risk?” asks Skynet. And in their final seconds, the geniuses reply: “We signed the statement.”

Follow Robert on Twitter @robertshrimsley and email him robert.shrimsley@ft.com

Follow @FTMag on Twitter to find out about our latest stories first




