Did OpenAI’s CEO Voluntarily Call for Regulation of AI?

AI For Business


  • OpenAI CEO Sam Altman testified for the first time since ChatGPT’s popularity exploded.
  • Senators appear to have accepted Mr. Altman’s warning that AI could “cause great harm to the world,” and his suggestion that a new agency could set the rules.
  • Altman admitted he was concerned about the impact of AI on elections.

That’s right. The CEO of OpenAI appeared before Congress earlier this week and told US lawmakers that regulation of artificial intelligence (AI) is essential and necessary. “If this technology doesn’t work out, it could go all the wrong way,” Sam Altman said. He made his first appearance before Congress on May 16.

The CEO of OpenAI, the company behind ChatGPT, the sensational generative AI chatbot, testified before a U.S. Senate committee on Tuesday. He was the latest executive out of Silicon Valley to do so. But unlike other CEOs, from Facebook’s Mark Zuckerberg to TikTok’s Shou Zi Chew, Altman was greeted with a much warmer and more serious reception.

Altman spoke about both the possibilities and the pitfalls of the new technology. Surprisingly, the senators present seemed quite willing to take his warning seriously. OpenAI’s CEO reiterated that AI could “cause great harm to the world,” accompanied by a plea for regulatory guardrails for the new technology.

How did OpenAI’s CEO’s call for regulation come about?

WASHINGTON, DC – MAY 16: Sen. Cory Booker (D-NJ) asks questions as OpenAI CEO Samuel Altman testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and Law on May 16, 2023 in Washington, DC. The subcommittee held an oversight hearing to examine rules for artificial intelligence. (Photo credit: Win McNamee / Getty Images North America / Getty Images via AFP)

Altman attended a Senate Judiciary Subcommittee hearing where a simple but difficult question topped the agenda: how should AI be regulated? To regulate a technology, especially one as complex and fast-moving as AI, Congress first needs to understand it.

So having the CEO of OpenAI, the Microsoft-backed startup behind ChatGPT, provide insight was a sensible move for lawmakers. What’s more, it was the Senate’s first major hearing on AI. “As this technology advances, we understand that people have concerns about how it could change our lives. So do we,” OpenAI’s CEO said at the hearing.

Other senators echoed South Carolina Republican Lindsey Graham’s likening of AI technology to nuclear reactors, which must be licensed and answer to regulators.

“I would form a new agency that licenses any initiative exceeding a certain scale of capability, and that license could be revoked to ensure compliance with safety standards,” Altman said, according to a Bloomberg report. He added that such a US authority could help form a global consensus on AI regulation.

In response, lawmakers in attendance agreed that Congress moves too slowly to keep pace with innovation, and that rulemaking in such a dynamic industry, especially one like AI, would be best left to a new agency.

Connecticut Democratic Senator Richard Blumenthal, who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and law, said AI companies should be obliged to test their systems before releasing them and to disclose known risks. Blumenthal also expressed concern that future AI systems could destabilize the job market.

Altman largely agreed, though he was more optimistic about the future of work. What is certain is that the CEO of OpenAI himself seemed haunted by his own biggest fears about the technology. Altman shied away from details, admitting only that the industry could cause “significant harm to the world” and that “if this technology doesn’t work, it could go very wrong.”

He then proposed that a new regulator impose safeguards to block AI models that can “self-replicate and self-exfiltrate into the wild.” Altman even admitted he was concerned about the impact the technology could have on elections. “This is not social media. This is different. So we need a different response.”

As for whether companies like OpenAI have reached the stage where generative AI development should be halted, senators, like the hearing’s witnesses, said it would be unwise to pause American innovation while competitors such as China continue to pursue AI advances.

However, Altman clarified that OpenAI has no plans yet to push out the next version of its flagship language-model-based tool. “We are not currently training GPT-5,” he said, adding that there are no plans to begin within the next six months.