- Warnings about the potential dangers of advanced AI have increased in recent months.
- Some of these statements are ambiguous, and experts disagree about what exactly the main risks are.
- Here are some of the potential threats posed by advanced AI, and how experts think about those risks.
AI may be as dangerous as nuclear war or a global pandemic.
That is the latest warning issued by the Center for AI Safety (CAIS), in a statement backed by major players in the AI industry, including Sam Altman, head of ChatGPT creator OpenAI.
This warning is one of many issued in recent months. Some of the technology's early creators argue that we are headed toward humanity's ruin, while others warn that regulation is desperately needed.
Some of these statements are vague, and people are having trouble making sense of the increasingly dramatic claims.
David Krueger, an AI expert and assistant professor at the University of Cambridge, said that while people may want specific scenarios for the existential risks of AI, it remains difficult to point to them with any degree of certainty.
“In terms of knowing exactly what the threats are, I’m not worried because of any immediate threat, but I don’t think there’s a lot of time to prepare for the potential threats ahead,” he told Insider.
With that in mind, here are some of the potential issues experts are concerned about.
1. Takeover by AI
One of the most commonly cited risks is that AI could escape the control of its creators.
Artificial general intelligence (AGI) refers to AI that is as smart as or smarter than humans across a wide range of tasks. Current AI systems are not sentient, but they are built to seem human. ChatGPT, for example, is designed to make users feel like they’re chatting with another person, Janis Wong of the Alan Turing Institute told Insider.
Experts disagree on exactly how to define AGI, but they generally agree that the potential technology poses dangers to humanity that require research and regulation, Insider’s Aaron Mok reported.
Krueger said the most obvious example of these dangers is military competition between nations.
“Military competition over autonomous weapons (systems that, by design, can affect the physical world and cause harm) makes it more apparent how such systems could end up killing many people,” he said.
“In the future, when we have advanced systems that are smarter than humans, I think it is very likely that an all-out AI scenario would spiral out of control and everyone would die as a result,” he added.
2. AI will cause mass unemployment
There is a growing consensus that AI is a threat to some jobs.
Abhishek Gupta, founder of the Montreal AI Ethics Institute, said the prospect of job losses from AI was the most “real and immediate” existential threat.
“We need to look at the lack of purpose people will feel when they lose a ton of jobs,” he told Insider. “The existential part of it is what people do and where they get their purpose from.”
“Work isn’t everything, but it’s a big part of our lives,” he added.
CEOs are starting to talk candidly about their plans to leverage AI. IBM CEO Arvind Krishna, for example, recently said the company would pause hiring for roles that could be replaced by AI.
“Four or five years ago, no one would have said something like that and been taken seriously,” Gupta said of IBM.
3. AI Bias
Systemic bias could become a serious risk if AI systems are used to aid broader social decision-making, experts told Insider.
There are already examples of bias in generative AI systems, including early versions of ChatGPT, some of whose responses were shocking. OpenAI has added more guardrails to help ChatGPT avoid problematic answers when users prompt the system for offensive content.
Generative AI image models can create harmful stereotypes, according to a test conducted earlier this year by Insider.
Gupta said undetected biases in AI systems used for real-world decisions, such as approving benefits, can have serious consequences.
According to Wong, training data is often primarily in English, and there is limited funding to train AI models in other languages.
“So either a lot of people will be excluded, or training in certain languages will be less successful than in others,” she said.
