The attack on Sam Altman exposed the dark underbelly of the anti-AI movement




New York

Mainstream artificial intelligence safety groups quickly distanced themselves from last week’s alleged attack on OpenAI CEO Sam Altman’s home by a 20-year-old man, after law enforcement officials said the attack appeared to be part of a plot to harm the AI executive. Some people on the internet, however, cheered the attack.

In a post, one X user compared the attacker to Luigi Mangione, who is accused of killing UnitedHealthcare CEO Brian Thompson in a politically motivated attack, and called the two “heroes.”

Multiple X users claimed the attack was “justified.”

“If this relentless push towards AI and the complete (sic) commoditization of being human is allowed to continue, episodes like this will become more common,” one user posted in an anti-AI Reddit group.

As technology advances rapidly, there are growing concerns that AI will displace human jobs, upend economies, harm the environment, and even pose an existential threat to humanity. Even technology company executives are issuing stark warnings.

But the recent attacks represent the extreme fringe of the anti-AI movement, moving from anonymous online comments to risky in-person actions, and they have sparked a debate in Silicon Valley about how to respond.

Three days before the attack on Altman’s home, Indianapolis City Councilman Ron Gibson reportedly had his home shot at in the middle of the night after a data center was approved in his district; a “No Data Center” note was left on his front door.

There have also been reports of vandalism and attacks on robotaxis and delivery robots in recent years, which some see as a harbinger of a high-tech future that not everyone was hoping for.

“[AI] is such a huge and pressing problem that people frankly don’t understand it and are just vaguely afraid of it,” said Doug McAdam, a sociology professor at Stanford University who studies political and social movements. He added that it was “not unusual” for such movements to “give rise to a radical dimension.”

“Ensuring society properly understands AI requires a democratic process, and robust debate about ideas is an important part of a healthy democracy,” OpenAI said in a statement after the attack. “However, there is no place in our democracy for violence against anyone, regardless of the AI lab they work in or their side of the debate. We are grateful for the quick response of law enforcement and that no one was hurt.”

Daniel Moreno-Gama, who is currently being held without bail, had spent time in an online space dedicated to discussing AI risks before the attack.

In an online interaction with the hosts of the AI podcast “The Last Invention,” Moreno-Gama mentioned the UnitedHealthcare CEO murder suspect and discussed “Luigi the tech CEO.”

The group acknowledged that Moreno-Gama had also posted on the Discord server of PauseAI, an organization that advocates for a pause on advanced AI development so safety measures can catch up, in the weeks before the attack. PauseAI condemned the attack and said Moreno-Gama was not an official member. The Discord server is public, and anyone can join.

“We exist to provide people with a peaceful and democratic path to address their concerns about AI, so this attack is the antithesis of everything we stand for,” Maxime Fornes, CEO of PauseAI, told CNN.

OpenAI CEO Sam Altman shared a photo of his family in a blog post after the attack, saying he hoped this would deter further violence.

Stop AI, another group calling for a halt to the development of advanced AI, said on Tuesday that Moreno-Gama had asked in an online forum earlier this year, “Will I get banned for talking about violence?” The group said he stopped posting after being told “yes.”

“Stop AI has always been committed to nonviolence. Stop AI’s current leadership is deeply committed to nonviolence, both in action and speech,” Stop AI said in a post on X, adding that its co-founders were removed from the group last year for making “provocative statements regarding violence.”

According to a criminal complaint filed by the FBI, Moreno-Gama had discussed “the alleged risks that AI poses to humanity,” written about killing Altman, and carried a document listing “the names and addresses of potential directors, CEOs, and investors of AI companies.”

Moreno-Gama’s lawyer, San Francisco public defender Diamond Ward, said in court this week that Moreno-Gama was in the midst of a mental health crisis during the incident and that his client was being overcharged for what was “at best a property crime,” according to the Associated Press. Moreno-Gama’s parents said in a statement that he had recently begun having mental health issues and had never harmed anyone, adding that they were worried about his health, the AP reported.

Some in the AI industry were already concerned. For example, OpenAI has long encouraged employees to remove their badges before leaving work.

Fornes said he was concerned about the possibility of more violence, saying such attacks could paint the complex and diverse but overwhelmingly peaceful AI safety movement in a negative light.

“Our response to this will be to redouble what we have been doing: peaceful and lawful advocacy,” he said. “I think it’s very important that a completely peaceful movement like ours stay aware of what’s going on, because darker movements could start to emerge.”

History shows that radical flanks can generate greater support for the more moderate wings of social movements, McAdam said.

AI companies “will have to think seriously about how to respond,” McAdam said. “Despite criticism of this extremist fringe, the movement as a whole is growing in profile and influence.”

That discussion has already begun.

Chris Lehane, head of global policy at OpenAI, said in an interview with the San Francisco Standard on Tuesday that some of the criticisms of AI are “not necessarily to blame.” “When you put these thoughts and ideas out there, there are always consequences,” he said, adding that the company needs to make clear to people that AI is “really good for them, their families, and society as a whole.”

His colleague Jason Wolfe, a member of OpenAI’s technical staff who works on alignment, a field focused on making AI models reflect human needs and values, publicly disagreed in a post on X on Thursday.

“We believe our job is to earn trust by realizing the benefits, being honest about the risks and uncertainties, sharing what we learn, measuring real-world impact, and supporting public oversight and resilience,” Wolfe said. “And while I certainly agree that the recent violence is horribly unjust and may have been incited by a few bad actors, I don’t think it’s good for public discourse to lump AI critics together as ‘ruiners’ and suggest that it’s inappropriate for them to voice their concerns.”

Asked for comment on this article, OpenAI pointed to a follow-up post from Wolfe, in which he said he had reviewed Lehane’s full quote in the Standard and that it had changed his impression of the comment. Wolfe said he believes policy leaders have made it clear that “communication about the benefits of AI should be honest and objective” and that “there are objective and legitimate reasons for questioning.”

–CNN’s Hadas Gold contributed reporting.
