Should AI be regulated? New poll results

Fans and foes of emerging generative artificial intelligence platforms such as ChatGPT, DALL-E, and Google’s Bard have many strong feelings about what the future holds for these new tools.

And according to data collected in a new Deseret News/Hinckley Institute of Politics poll, Utahns have strong feelings about the advancement of artificial intelligence and what should, or shouldn't, be done to regulate its further development.

Geoffrey Hinton, a British-Canadian scientist and researcher widely considered the "godfather of AI," recently quit his job with Google's artificial intelligence program so he could speak more freely about his concerns over the new technologies. After a career focused on developing digital neural networks (designed to mimic the way the human brain processes information), Hinton contributed to breakthroughs that led to today's rapidly advancing AI tools. He said he has changed his mind about their potential consequences.

“The problem is that even if these things get smarter than us, it’s not clear that we’ll be able to control them,” Hinton said. “There are very few instances where something more intelligent is controlled by something less intelligent.”

In an interview with CBS News in March, Hinton was asked whether AI could destroy humanity.

“It’s not inconceivable,” Hinton said. “That’s all I’ll say.”

In an essay published earlier this month titled “Why AI will save the world,” Marc Andreessen, a leading Silicon Valley venture capitalist, argued that the fear of emerging technologies destroying humanity is deeply embedded in our culture, and that the notion of AI-based programs “coming alive” to kill us all is about as plausible as a toaster going on a murderous rampage.

“It is my view that the idea that AI will literally destroy humanity is a grave fallacy,” Andreessen wrote in the June 6 post. “AI is not a creature, like animals and us, primed by billions of years of evolution to participate in the battle of the survival of the fittest. It is owned by people, used by people, controlled by people.

“The idea that at some point it will develop a mind of its own and decide that it has motivations that lead it to kill us is a superstitious hand-wave.”

A statewide poll of registered Utah voters, conducted May 22 through June 1, found 69% of respondents were somewhat or very concerned about the increased use of artificial intelligence programming, while 28% said they had little or no concern about the advances.


Analysis of responses by political party showed that Republicans and Democrats expressed about the same levels of concern, or lack of concern, about advances in AI, while female respondents reported a higher level of anxiety about the new tools than male respondents did, 76% versus 63%.

The poll, conducted by Dan Jones and Associates of 798 registered voters in Utah, has a margin of error of plus or minus 3.46 percentage points.

The concerns about AI reflected by people in Utah are also felt widely among political leaders, with efforts to find regulatory responses to AI advancements well underway in the United States and around the world.

Last month, the U.S. Senate convened a committee hearing, which leaders characterized as the first step in a process leading to new oversight mechanisms for artificial intelligence programs and platforms.

Sen. Richard Blumenthal, D-Conn., chairman of the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and Law, convened a witness panel that included Sam Altman, co-founder and CEO of OpenAI, the company that developed ChatGPT, DALL-E and other AI tools.

“Our goal is to demystify and hold accountable these new technologies to avoid some of the mistakes of the past,” Blumenthal said.

Blumenthal said those past mistakes include lawmakers' failure to impose stricter regulations on the conduct of social media operators.

“Congress now has a choice,” Blumenthal said. “We faced the same choice when confronted with social media, and we failed to seize that moment.

“Congress failed to meet that moment on social media. Now we have an obligation to do so on AI before the threats and risks become real.”

Since Altman co-founded OpenAI in 2015 with backing from tech billionaire Elon Musk, the venture has evolved from a nonprofit research institution with a safety-focused mission into a business, according to The Associated Press. Microsoft has invested billions of dollars in the startup and integrated its technology into its own products, including the search engine Bing.

Altman was quick to agree with committee members that a new regulatory framework is needed as the AI tools his company and others are developing continue to make evolutionary leaps. He also warned that AI could cause widespread harm as it continues to advance.

“My biggest fear is that we, the technology industry, cause significant harm to the world,” Altman said. “I think that could happen in a number of different ways. If this technology goes wrong, it can go quite wrong, and we want to be vocal about that.

“We want to work with the government to prevent that from happening, and we try to be very clear-eyed about what the downsides are and the work we have to do to mitigate them.”

People in Utah appear to have mixed feelings about increased government regulation of AI tools. A plurality of poll participants, 43%, said they wanted more regulation, while 19% said AI regulation should be relaxed and 26% said the status quo should be maintained.


Republicans and Democrats were nearly equally supportive of more government regulation of AI, but more Republicans than Democrats, 22% versus 12%, wanted less regulation.

As for which level of government should be involved in regulatory oversight of artificial intelligence advances, a question reflected in the current hodgepodge of regulatory efforts by both state and federal legislatures, a majority of poll participants, 53%, said the federal government should be responsible for regulatory oversight. Twenty-two percent of respondents believe state governments should oversee AI, while 17% said government should not be involved in regulating tech companies working on artificial intelligence at all.


Both Hinton and Altman signed a one-sentence open letter published last month by the nonprofit Center for AI Safety and endorsed by a broad group of eminent scientists, academics and technology developers.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.

But Andreessen believes a light regulatory approach is the best way forward, noting that some global players are unlikely to honor any supranational effort to build regulatory protections.

Instead, Andreessen said the best way forward is to allow both the big AI players and the upstarts in the space to “build AI as quickly and aggressively as possible.” And he believes public-private partnerships are the best tool for preparing for the inevitable misuse of advanced artificial intelligence technologies and for taking full advantage of their advancements.

“To offset the risk of bad actors using AI to do bad things, governments should work in partnership with the private sector to vigorously engage each area of potential risk and use AI to maximize society’s defensive capabilities,” Andreessen wrote. “This shouldn’t be limited to risks posed by AI, but should also extend to more general problems such as malnutrition, disease and climate. We should embrace AI that way.”
