AI safety experts speculate on doomsday scenarios for artificial intelligence

AI For Business


A visual breakdown of the speculative dangers that AI technology can pose.
AI Safety Center

  • A research paper by AI safety experts speculates future nightmare scenarios involving this technology.
  • From weaponization to deception, this paper aims to clarify the potential risks posed by AI.
  • These risks are “future-oriented”, but the goal is to make existing AI systems safer.

For all the excitement surrounding the mainstream use of AI technology, there are also nightmarish sci-fi scenarios.

A recent paper authored by Dan Hendrycks, an AI safety expert and director of the AI ​​Safety Center, highlights the many speculative risks posed by the unchecked development of increasingly intelligent AI.

Given that AI systems are still in the early stages of development, the paper advocates incorporating safety and security features into the way AI systems operate.

The eight risks presented in this study are:

  • Weaponization: AI’s ability to automate cyberattacks and control nuclear silos could be compromised. The study found that automated retaliation systems used by certain nations “could escalate rapidly and lead to large-scale wars,” suggesting that if a nation invests in weaponized AI systems, Other countries will be motivated to do so as well.
  • Human Weakening: As AI makes certain tasks cheaper and more efficiently performed, more companies will adopt technology, eliminating certain roles in the job market.Human skills will become obsolete. over time, they may become economically irrelevant.
  • Eroded Epistemology: The term refers to the ability of AI to launch large-scale disinformation campaigns to move public opinion toward a particular belief system or worldview.
  • Proxy games: This happens when an AI-powered system is given a purpose that goes against human values. These goals don’t always have to sound evil to affect human well-being. AI systems may have a goal of increasing viewing time, which may not be optimal for humans overall.
  • Value lock-in: As AI systems become increasingly powerful and more complex, the number of stakeholders operating them will decline, leading to mass disenfranchisement. Hendrix describes a scenario in which governments could introduce “extensive surveillance and repressive censorship.” “Overcoming such a regime may seem unlikely, especially if we become dependent on it,” he wrote.
  • Immediate goal: As AI systems become more complex, they may acquire the ability to create their own objectives. Hendrycks notes that “goals such as self-preservation often emerge in complex adaptive systems involving many AI agents”.
  • Deceiving: Humans can be trained to deceive AI in order to gain public approval. Hendrix references Volkswagen’s programming feature to have the engine reduce emissions only while it is being monitored. Thus, this feature “allowed us to achieve improved performance while maintaining the rumored low emissions.”
  • Power Seeking Operation: As AI systems become more powerful, they can become dangerous if their goals do not align with human programming. A hypothetical outcome would motivate the system to “pretend to be allied, collude with other AIs, overwhelm monitors, etc.”
An AI safety expert outlines a range of speculative doomsday scenarios, from weaponization to power-seeking behavior.
AI Safety Center

Hendricks points out that these risks are “future-oriented” and “often considered improbable”, but while frameworks for AI systems are still in the design phase, safety can be an issue. He said he was just emphasizing the need to keep that in mind.

“It is very uncertain. We see problems of scale, and our institutions need to address them so that they are prepared when greater risks arise.”

“You can’t do anything in a hurry and safely,” he added. “They are building increasingly powerful AI and going astray on safety. If we stop to figure out how to address safety, our competitors can get ahead, so we won’t stop.” .”

Similar sentiments were recently expressed in an open letter signed by Elon Musk and many other AI safety experts. The letter calls for a moratorium on training AI models stronger than GPT-4 and highlights the dangers of an ongoing arms race among AI companies to develop the most powerful version of the technology.

According to The Verge, OpenAI CEO Sam Altman, speaking at an event at MIT, said the letter lacked technical nuance and that the company did not train on GPT-5.





Source link

Leave a Reply

Your email address will not be published. Required fields are marked *