Head of ChatGPT Maker Says Artificial Intelligence Should Be Regulated by U.S. or Global Agency



The head of the artificial intelligence company behind ChatGPT told a U.S. Senate panel on Tuesday that government intervention is essential to mitigate the risks of increasingly powerful AI systems.

“As this technology advances, we understand that people are anxious about how it will change our lives. So are we,” OpenAI CEO Sam Altman said at the Senate hearing.

Altman proposed the creation of a U.S. or global agency with the power to license the most powerful AI systems, revoke those licenses, and ensure compliance with safety standards.

His San Francisco-based startup quickly made headlines after releasing ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with compelling, human-like responses.

What started as a panic among educators over students using ChatGPT to cheat on homework has expanded into broader concerns that the latest “generative AI” tools could mislead people, spread falsehoods, violate copyright protections, and upend some jobs.

There is no immediate indication that Congress will craft sweeping new AI rules, as European lawmakers are doing. Societal concerns about the technology brought Altman and other tech CEOs to the White House earlier this month, and U.S. government agencies have promised to crack down on harmful AI products that violate existing civil rights and consumer protection laws.

Connecticut Democratic Senator Richard Blumenthal, who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and law, opened the hearing with a recorded speech that sounded like the senator but was in fact a voice clone trained on Blumenthal’s floor speeches, reciting an opening statement written by ChatGPT.

Blumenthal said the results were impressive, but he asked what would have happened if the tool had instead delivered an endorsement of Ukraine’s surrender or of the leadership of [Russian President] Vladimir Putin.

The overall tone of senators’ questioning on Tuesday was polite, a contrast with past congressional hearings in which tech and social media executives were berated over the industry’s failures to protect data privacy and combat harmful misinformation. That was partly because both Democrats and Republicans said they were interested in drawing on Altman’s expertise to avert problems that have not yet occurred.

Blumenthal said AI companies should be required to test their systems and disclose known risks before releasing them, and he expressed particular concern about the potential for future AI systems to destabilize the job market. Altman largely agreed, though he was more optimistic about the future of work.

Pressed about his own biggest concern about AI, Altman mostly avoided specifics, except to say that the industry could cause “significant harm to the world” and that “if this technology doesn’t work, it could go very wrong.”

But he later suggested that a new regulator should impose safeguards to block AI models that could self-replicate and escape into the wild, alluding to futuristic concerns about highly sophisticated AI systems that might manipulate humans into ceding control.

Co-founded by Altman in 2015 with the backing of tech billionaire Elon Musk, OpenAI has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image generator DALL-E. Microsoft has invested billions of dollars in the startup and has integrated its technology into its own products, including the Bing search engine.

This month, Altman is embarking on a world tour of national capitals and major cities across six continents to talk about the technology with policymakers and the public. The night before his Senate testimony, he dined with dozens of U.S. lawmakers, some of whom told CNBC they were impressed by his comments.

Also testifying were Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts calling on OpenAI and other tech companies to pause the development of more powerful AI models for six months to give society time to weigh the risks. That letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

A leading Republican on the committee, Sen. Josh Hawley of Missouri, said the technology would have significant implications for elections, jobs and national security. He said Tuesday’s hearing was “an important first step in understanding what Congress should do.”

Many tech industry leaders say they welcome some form of AI oversight but warn against what they see as overly heavy-handed rules. Both Altman and Marcus called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the United Nations’ atomic energy agency and Marcus likening it to the U.S. Food and Drug Administration. But IBM’s Montgomery instead urged Congress to take a “precision regulation” approach.

“Inherently, we believe AI should be regulated at the point of risk,” Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.
