AI Doomers playing with fire

After OpenAI’s ChatGPT debuted in late 2022, it didn’t take long for mainstream America to start hearing a startling warning: executives at major AI companies said they were building radical new technologies that posed pressing risks to society. And the risks weren’t just about digital security. AI, they said, had the power to destroy the entire world.

From the jump, it was clear that these warnings were as much a sales ploy as they were serious predictions about how AI would work and what ripple effects it would create. AI executives testified before Congress about how frightening their technology was, pitching their products to governments while effectively begging for regulation. Now those same executives are telling everyone to calm down.

Chris Lehane, OpenAI’s global policy chief, gave an interview to the San Francisco Standard this week following at least one attack on CEO Sam Altman’s home.

“Some of the conversations that are going on out there are not necessarily responsible,” Lehane told the Standard. “And when you put those thoughts and ideas out there, there are always consequences.”

Lehane was referring to the person who allegedly threw a Molotov cocktail at Altman’s home a week ago. Daniel Morenogama, a 20-year-old Texas resident, is accused of throwing an incendiary device at Altman’s home and smashing a glass door with a chair before heading to OpenAI’s headquarters.

Police said Morenogama was carrying anti-AI “documents” suggesting his motives were related to concerns about artificial intelligence and existential threats. The Wall Street Journal reported that he had invoked the name Luigi in reference to tech CEOs, after Luigi Mangione, who is charged with murder in the killing of UnitedHealth’s CEO.

Just two days later, Morenogama was released, though a second incident, in which two people allegedly fired a gun near Altman’s home, remains under investigation.

Lehane divides the world into two camps: people who think AI is the best thing ever and will inevitably lead to a world of abundance and leisure, and the people he calls doomers, who “have a very, very negative, dark view of humanity.”

Lehane argues that so-called AI doomers exist simply because the industry hasn’t properly marketed the benefits of this new technology. “Our job at OpenAI and in the AI space, and we need to do a better job, is to explain to people why…this is going to be really good for them, their families and society as a whole,” Lehane told the Standard.

But given what people like Altman have been saying, it’s hard to take that argument seriously. And the doom talk didn’t even start in 2022. Back in 2015, Altman said, “I think AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies created with serious machine learning.”

What do you do when you hear something like that from someone in power? You have two choices. You can dismiss Altman as unserious, in which case there’s nothing for humanity to do. Or you can take a tech company’s CEO at his word that the technology he’s building could end the world. The question then becomes: what can be done about it?

No fate but what we make

We know what happens in dystopian stories. In Terminator 2: Judgment Day, Sarah Connor decides she needs to kill the researcher most responsible for the activation of Skynet and the rise of the machines. She can’t bring herself to do it, but after she explains what will happen in the future, the researcher helps her access the technology so she can destroy it.

Altman also signed a letter on the “risk of extinction,” warning that AI could be used to “design new biological pathogens” if it is not properly controlled. But he has also argued that the United States should be the one to develop these potentially devastating technologies, because leaving them to a geopolitical adversary carries its own risks.

As Altman put it in 2023, “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.”

I turned to Altman’s own product, ChatGPT, and asked it about his comments on the existential threat to humanity. Specifically, I asked whether Altman had talked about rogue AI and the end of the world on Joe Rogan’s podcast. Interestingly, ChatGPT claimed he had never appeared on Rogan’s show. In fact, Altman appeared in episode 2044 of The Joe Rogan Experience, first released on October 6, 2023.

After I corrected ChatGPT, it responded with the now-cliché “You’re right,” and so on. The quotes it gave me:

  • “There are risks…if this technology doesn’t work, it could go quite wrong.”
  • “What I’m worried about is losing control of the system…”
  • “This can go really, really wrong…like the lights out mistake.”

As far as I can tell, that last quote is not accurate. It doesn’t appear in the YouTube transcript of the episode. But Altman said something very close to it in an interview on the StrictlyVC podcast. “The bad case, and I think this is important to say, is lights out for all of us,” Altman told the room. Close, but not exact, which perhaps shows how AI systems can distort the record of what people actually said.

Anthropic CEO Dario Amodei has echoed similar sentiments, telling Axios earlier this year: “Humanity is on the verge of gaining almost unimaginable power, but it is highly uncertain whether our social, political, and technological systems have the maturity to wield it.” Amodei says he is “terrified by AI authoritarianism.”

Amodei has also warned that anyone with a STEM degree could create biological weapons with the help of AI models, and he has called for guardrails. Some of those guardrails have gotten Anthropic in trouble with the Pentagon, which has reportedly moved to blacklist the company and drop Claude from its systems after Amodei refused to lift safeguards barring Claude’s use in domestic mass surveillance or autonomous weapons systems.

If someone testified that they had created a tool that could potentially end the world, you would expect that person to be immediately handcuffed and marched away. I first heard that idea from someone else a few years ago and haven’t been able to track down who said it first, but it’s spot on.

Think about it in other contexts. Imagine someone said they had developed a weapon that could go berserk and literally end life on Earth. Would the federal government act as if light regulatory tinkering around the edges were the only solution? Or would the company’s executives be rounded up and thrown in jail for making terrorist threats?

Threatening to eliminate people’s livelihoods is a threat to human life

Setting aside the rise of Skynet, there is the more pressing issue of unemployment. Over the past year, many companies have cited AI as a reason for layoffs, and some may simply be using it as a convenient excuse. But there is no denying that AI is capable of disrupting the labor market when it comes to writing and other white-collar jobs.

AI CEOs are eager to tell everyone that this disruption is coming and insist that governments should deal with it, while also lobbying those same governments to stay out of the way. Perhaps no one embodies this attitude better than Elon Musk, whose xAI is developing the Grok AI chatbot.

“Universal high income through checks issued by the federal government is the best way to address AI-induced unemployment,” Musk wrote on Friday. “AI/robotic technology will produce goods and services that far exceed the increase in money supply, so there will be no inflation.”

I have previously argued that it is absurd for Musk, of all people, to claim that government will deliver a utopian world of abundance. Last year, when Musk was a Trump henchman, the billionaire supported the complete destruction of USAID, cutting funding to critical programs and attacking those he claimed were milking the system.

His so-called Department of Government Efficiency (DOGE) helped purge about 300,000 federal employees, and he made it his mission to argue that people he deemed undeserving shouldn’t receive government benefits. Is this really the person telling us not to worry about AI because the government will hand out free money? It’s ridiculous.

Why would anyone try to sell the public a product by telling them it will take away their jobs? Because the pitch isn’t really aimed at the public. It’s aimed at investors, governments, and the people who buy enterprise software for businesses. The public is just supposed to be satisfied with turning their photos into Studio Ghibli-style images.

An unelected ruling class makes decisions for everyone

All the AI elites are touting their products as inevitable. Part of the sales pitch is that there’s nothing you can do to stop it; the public just needs to accept it while finding ways to get by in a system where AI causes job losses. These oligarchs, and that is what they are, are positioning themselves as a ruling class, one that nobody elected. And yet, if you’re lucky enough to survive a robot uprising, they’re the ones likely to dictate your life for the next year, five years, or even 20 years.

Altman himself wrote a blog post a week after the attack on his home. He shared a photo of his husband and children, saying he hoped that, whatever people think of him, it would deter the next person from throwing a Molotov cocktail at his house. Altman appears to be doing his best to humanize himself in order to head off further attacks.

Whatever happens next, AI executives seem to have backed themselves into a corner. They told everyone their product has the potential to destroy everything. They were doomers themselves, if we want to use that term, at least when it was convenient. Now we appear to be entering a new era, in which the same people who warned us about the dangers of AI insist we focus only on the enormous benefits they claim it will bring society. So far, there’s little to show for it.

It’s unclear how you put that doomer genie back in the bottle.
