Focus on AI's real risks, not hypothetical existential threats

Artificial intelligence (AI) has become a topic of global conversation in recent months, following the widespread adoption of generative AI tools such as chatbots and automatic image generators. Prominent AI scientists and engineers have raised concerns about the hypothetical existential risks posed by these developments.

Having worked in the field of AI for decades, we are struck by this surge in popularity and by the sensationalism that has followed. The purpose of this article is not to antagonize, but to rebalance a public perception that seems disproportionately dominated by fears of speculative AI-driven existential threats.

That is not to say we cannot or should not be concerned about AI's more specific risks. As members of the European Laboratory for Learning and Intelligent Systems (ELLIS), a research network focused on machine learning, we feel it is our role to put these risks into perspective, particularly in the context of governmental organizations contemplating regulatory action with input from technology companies.

What is AI?

AI is a discipline within computer science and engineering that took shape in the 1950s. Its goal is to build computational systems that are intelligent, taking human intelligence as a reference. Because human intelligence is complex and diverse, many areas within AI aim to emulate particular aspects of it, from perception and reasoning to planning and decision-making.

AI systems can be classified into three levels according to their competence.

  1. Narrow or weak AI refers to AI systems that can perform a specific task or solve a specific problem, nowadays often with levels of performance superior to humans. All of today's AI systems are narrow AI. Examples include chatbots such as ChatGPT, voice assistants such as Siri and Alexa, image recognition systems, and recommendation algorithms.
  2. General or strong AI refers to AI systems that display human-level intelligence, including the ability to understand, learn, and apply knowledge across a wide range of tasks, potentially incorporating concepts such as consciousness. General AI is largely hypothetical and has not been achieved to date.
  3. Super AI refers to AI systems whose intelligence would exceed human intelligence in all tasks. By definition, we humans would be unable to comprehend this kind of intelligence, in the same way that ants cannot comprehend ours. Super AI is an even more speculative concept than general AI.

AI can be applied to any field, from education and transportation to healthcare, law, and manufacturing, and it is profoundly transforming every aspect of society. Even in its current form of narrow AI, it holds enormous potential to generate sustainable economic growth and to help tackle the most pressing challenges of the 21st century, such as climate change, pandemics, and inequality.

Challenges posed by today’s AI systems

The introduction of AI-based decision-making systems in a wide range of areas, from social media to the labor market, over the past decade has also created significant societal risks and challenges that need to be understood and addressed.

The recent advent of sophisticated large-scale generative pretrained transformer (GPT) models exacerbates many of the existing challenges, while also creating new ones that require attention. These tools are being adopted by hundreds of millions of people around the world at unprecedented scale and speed, putting additional stress on our social and regulatory systems.

Among the issues we believe should be prioritized are the following.

  • Manipulation of human behavior by AI algorithms, with potentially devastating social consequences for the spread of misinformation, the formation of public opinion, and the outcomes of democratic processes.
  • Algorithmic bias and discrimination that not only perpetuate but exacerbate stereotypes, patterns of discrimination, and even oppression.
  • Lack of transparency in both models and their uses.
  • Violation of privacy and the use of massive amounts of training data without the consent of, or compensation for, its creators.
  • Exploitation of the workers who annotate, train, and correct AI systems, many of whom are located in developing countries and paid meager wages.
  • The huge carbon footprint of the massive data centers and neural networks required to build these AI systems.
  • The lack of truthfulness in generative AI systems, which invent believable content (images, text, audio, video) that has no correspondence with the real world.
  • The fragility of these large models, which can make mistakes and can be deceived.
  • The displacement of jobs and occupations.
  • The concentration of power in the hands of an oligopoly that controls today’s AI systems.

Is AI really an existential threat to humanity?

Unfortunately, the public conversation, including recent open letters, has focused primarily on the hypothetical existential risks of AI rather than on these concrete ones.

Existential risk refers to potential events or scenarios that represent threats to human survival, with consequences that may cause irreversible damage or destruction to human civilization and ultimately lead to human extinction.

Examples of events that could pose an existential threat include a global catastrophe (such as an asteroid impact or a pandemic), the destruction of a habitable planet (through climate change, deforestation, or the depletion of critical resources such as water and clean air), or a global nuclear war.

Our world certainly faces many risks, and future developments are hard to predict. In the face of such uncertainty, we need to prioritize our efforts. The remote possibility of an uncontrolled superintelligence must therefore be viewed in context: the context of the 3.6 billion people living in areas highly vulnerable to climate change, the roughly 1 billion people living on less than a dollar a day, and the 2 billion people affected by conflict. These are real human beings whose lives are in grave danger today, a danger that is certainly not caused by any super AI.

Focusing on hypothetical existential risks distracts attention from the serious, well-documented challenges that AI poses today, fails to reflect the diverse perspectives of the broader research community, and risks spreading unnecessary alarm among the public.

Society would certainly benefit from bringing the necessary diversity, complexity, and nuance to these issues, and from designing concrete, coordinated, and workable solutions, including regulation, to address today’s AI challenges.

Addressing these challenges requires the cooperation and engagement of the sectors of society most affected, along with the necessary technical and governance expertise. Now is the time to act, with ambition and wisdom, and together.


The authors of this article are members of the Board of Directors of the European Laboratory for Learning and Intelligent Systems (ELLIS): Nuria Oliver, Director of the ELLIS Alicante Foundation and Honorary Professor at the University of Alicante; Bernhard Schölkopf, Max Planck Institute for Intelligent Systems; Florence d’Alché-Buc, Professor at Télécom Paris – Institut Mines-Télécom; Nada Lavrač, Research Councillor at the Jožef Stefan Institute and Professor at the University of Nova Gorica; Nicolò Cesa-Bianchi, Professor at the University of Milan; Sepp Hochreiter, Johannes Kepler University Linz; and Serge Belongie, Professor at the University of Copenhagen.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


