What would the former Google CEO think if AI gained free will?

Former Google CEO Eric Schmidt has invested in a number of AI and science startups.
Eugene Gologulsky/Getty

  • At VivaTech in Paris, Eric Schmidt made some disturbing predictions about the dangers of AI.
  • The former Google CEO said that if computers gained free will, “we would turn them off.”
  • He added that the risk of cyber and biological attacks would become a reality in the next three to five years.

Eric Schmidt spoke at the annual VivaTech conference in Paris on Wednesday and made some disturbing predictions about AI.

Since leaving Google, the former CEO has invested in a number of artificial intelligence startups and says AI regulation should strike a balance so as not to stifle innovation.

Schmidt acknowledged that advances in AI pose dangers, but said the biggest threats are yet to come. And if those threats do materialize, Schmidt seems to think the world has the tools to deal with them.

“Now, if computers had free will, what do you think we would do?” Schmidt said at the conference. “We would turn them off.”

“Let's see who pulls the plug on who,” responded Yoav Shoham, co-founder and co-CEO of AI21 Labs, who spoke with Schmidt at the event.

Admittedly, the idea of rushing to shut off AI systems once they acquire free will, and managing to do so in time, isn't a comforting thought experiment. But Schmidt said researchers are conducting detailed assessments of AI dangers, and “the answer is, we know when danger is coming.”

It's worth noting that the former Google CEO has been invested in efforts to combat AI risk. Schmidt partnered with OpenAI to launch a $10 million grant program to support technical research by the company's Superalignment team, which specialized in managing AI-related risks. Despite the team's dissolution last week, OpenAI plans to continue moving forward with the grant program, a spokesperson told Business Insider.

For now, Schmidt said, the most serious danger posed by current AI is disinformation, which he called “out of control” and a source of “real problems for democracy.”

Disinformation has become a bigger problem over the past few years with the advent of AI. Recent research on Meta and OpenAI systems found that AI systems have systematically learned to “instill false beliefs in others in order to achieve outcomes other than the truth.”

Deepfakes have also become a major problem, with AI-generated porn using the likenesses of celebrities and political leaders. There have also been reports of AI-generated phone calls impersonating President Joe Biden. In 2022, fraudsters pleaded guilty to using targeted robocalls to dissuade voters from voting by mail.

Schmidt said the real dangers posed by large language models are cyber and biological attacks, which haven't happened yet but “will happen within the next three to five years.”

Schmidt did not immediately respond to a request for comment.

On February 28, Axel Springer, the parent company of Business Insider, along with 31 other media groups, filed a $2.3 billion lawsuit against Google in a Dutch court, alleging damages caused by the company's advertising practices.
