AI Could Threaten Humanity Within Two Years, Advisor To UK AI Task Force Warns

AI News

An adviser to the British Prime Minister’s Artificial Intelligence (AI) Task Force has warned that humans have roughly two years to bring AI under control and regulation before it becomes too powerful.

Matt Clifford, who also chairs the UK’s Advanced Research and Invention Agency (ARIA), stressed in an interview with local UK media that current systems are “increasing their capabilities at an ever-increasing rate.”

He went on to say that if authorities don’t consider safety and regulation now, the systems will become “very powerful” within two years.

“We have about two years to put in place a framework that makes controlling and regulating these very large models far more possible than it is today.”

Clifford warned that there are “all kinds of risks” in the short and long term when it comes to AI, calling it “pretty scary.”

The interview follows an open letter recently published by the Center for AI Safety and signed by 350 AI experts, including OpenAI CEO Sam Altman, stating that AI should be treated as an existential threat on par with nuclear weapons and pandemics.

“They’re talking about what would happen if we effectively created a new species with intelligence greater than humans.”

The AI Task Force adviser said these threats could be “very dangerous,” warning that models at the capability level expected two years from now could kill “many, but not all, humans.”

Related: AI-Related Cryptocurrency Revenue Increases Up To 41% After ChatGPT Launch: Study

Clifford said the main focus for regulators and developers is to understand how to control the model and introduce regulation on a global scale.

For now, he said, the biggest concern is the lack of understanding of why AI models behave the way they do.

“The people building the most capable systems freely admit that they don’t understand exactly how [AI systems] exhibit the behaviors that they do.”

Clifford stressed that many leaders of organizations building AI agree that powerful AI models should undergo some sort of audit and evaluation process before deployment.

Regulators around the world are now scrambling to understand this technology and its implications, and to create regulations that enable innovation while protecting users.

On June 5, European Union officials went so far as to suggest that all AI-generated content be labeled as such to prevent disinformation.

In the UK, a leading opposition Labour Party MP echoed the sentiments of the Center for AI Safety letter, saying the technology should be regulated in the same way as medicine and nuclear power.

Magazine: AI Eye: 25,000 traders bet on ChatGPT stock selection, AI is bad at throwing dice and more