Social media is buzzing with news that more than 1,000 tech and AI luminaries have signed a petition asking the industry to suspend training for six months on artificial intelligence (AI) systems more powerful than OpenAI’s GPT-4.
The signatories are a diverse list that includes tech mogul Elon Musk, tech legend Steve Wozniak, and former Democratic presidential candidate and futurist Andrew Yang.
Kevin Roose, technology reporter for the New York Times and author of “Futureproof,” shared his research on Yang’s podcast. In it, Roose asked people whether AI automation will replace millions of jobs by 2030 — roughly 75% to 80% said yes. But when asked whether AI would replace their own jobs, only 20% said “yes.”
“People have this idea that these tools are incredibly powerful, creative and disruptive and will change the entire economy in the next 10 years, but not my job. I am special. I am unique. I am creative. I am human. I am untouchable. So I think there is a lot of wishful thinking and almost arrogance in this,” said Roose. “I’ve come to terms with my eventual obsolescence. I hope the robot overlords will show me mercy when they put me out of work.”
Some security experts saw the moment as largely media hype, even as they spoke of the need for the industry to take a more sober approach to AI development and perhaps even involve governments.
Coalfire Vice President Andrew Barratt said a pause would also be impossible to enforce globally and collaboratively: “AI will be the productivity enabler for the next few generations, monetized by advertisers who place their products in answers.”
According to Barratt, the surge in fear is largely due to the recent focus on ChatGPT. Rather than pausing, he said, the industry should encourage knowledge workers around the world to take full advantage of the increasingly consumer-friendly AI tools that can improve their productivity.
“People who don’t will get left behind,” said Barratt.
Dan Shiebler, head of machine learning at Abnormal Security, said it was interesting to see how diverse the signers and their motivations were. Musk, for example, has been quite vocal about his belief that AGI (computers finding ways to make themselves better, resulting in exploding capabilities) is an imminent danger, according to Shiebler, while Gary Marcus is clearly coming to the letter from a different angle. Setting aside his cynicism about Musk, Marcus has seemed genuinely worried of late, Shiebler said.
“Personally, I don’t think the letter will do much,” said Shiebler. “The cat is out of the bag with these large language models. The limiting factors in generating them are money and time, and both of those will fall rapidly. We need to be ready to use these models reliably, not try to stop the clock on their development.”
Netenrich’s chief threat hunter, John Bambenek, said he doubts anyone will put anything on hold, but added that there is a growing realization that consideration of the ethical implications of AI projects has lagged far behind the pace of development.
“I think it’s good that we’re re-evaluating what we’re doing and the serious implications it can have, because we’ve already seen some epic failures when it comes to mindless AI/ML deployments,” said Bambenek.
Kevin Bocek, vice president of ecosystem and community at Venafi, said that in a perfect world, slowing down and getting all the ducks in a row would be a great idea, given that the rush to adopt and develop AI has serious implications.
“But really, it’s a stupid, impractical idea,” said Bocek. “That genie is well out of the bottle. This is like saying you shouldn’t use encryption because criminals can use it; criminals already do. We will never get a global consensus to put the brakes on AI, and even if we did, people might publicly agree while development never stops. Countries that pause will only be left behind, giving an advantage to countries like China and potentially missing out on all the benefits AI brings to society.”
These AI tools are powerful because they can understand the context of questions and information, explained Baber Amin, COO of Veridium. Amin said elected representatives should be encouraged to establish some form of oversight at the state and federal levels for the responsible use and deployment of AI-based technologies.
“We’ve already seen things go wrong with Microsoft’s chatbots, so self-governance shouldn’t be an option,” Amin said. “Responsible use of AI in search engines should focus on transparency, fairness, user privacy, accuracy, accessibility and social responsibility.”
Marc Rotenberg, founder of the Center for AI and Digital Policy, said in a LinkedIn post that one of the reasons he signed the letter is that it recommends AI developers accelerate their work with policymakers to create robust AI governance systems.
Rotenberg noted that the letter includes many of the recommendations of the Center for AI and Digital Policy.
“The AI research community needs to realize that a robust governance framework for AI already exists,” said Rotenberg. “Implementation requires a lot of work. Support from the AI community helps.”