OpenAI CEO responds to Musk and Wozniak letter calling for AI pause



  • OpenAI CEO Sam Altman said an open letter signed by dozens of academics and researchers, as well as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, lacked “technical nuance.”
  • Altman said he agreed that better safety guidelines were needed.
  • But, Altman said, an open letter was not the right way to do it.

Y Combinator President Sam Altman

Patrick T. Fallon | Bloomberg

OpenAI CEO Sam Altman said he agreed with parts of an open letter from the Future of Life Institute signed by tech leaders like Tesla CEO Elon Musk and Apple co-founder Steve Wozniak. The letter called for a six-month pause on AI research, but Altman said it “was largely devoid of technical nuance as to where the pause was necessary.”

Altman made the remarks during a video appearance on Thursday at an MIT event discussing business and AI.

OpenAI created ChatGPT, an AI bot that can generate human-like responses to users’ questions. The bot has sparked an AI frenzy in the tech world: Microsoft uses OpenAI’s technology in its Bing chatbot, and Google recently launched its competitor, Bard.

“I think it’s really important to move cautiously and with increasing rigor on safety issues,” Altman said. “I don’t think the letter was the best way to address it.”

In March, Musk, Wozniak, and dozens of other academics called for an immediate pause on training experiments for large language models “more powerful than GPT-4,” OpenAI’s flagship large language model (LLM). Since then, over 25,000 people have signed the letter.

OpenAI’s GPT technology gained international attention when ChatGPT launched in 2022. GPT technology underpins Microsoft’s Bing AI chatbot and has fueled a surge in AI investment.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said.

“I also agree that as capabilities become more and more serious, the safety bar has to rise,” Altman said at the MIT event.

Earlier this year, Altman admitted that AI technology “scared” him a little. Questions about the safe and ethical use of AI are being raised in the White House, the Capitol, and boardrooms across America.

“We’re doing other things in addition to GPT-4 that we think are important to address, and there are all sorts of safety issues that were completely left out of the letter,” the OpenAI chief said.


