Bill Gates, AI Developers Respond To Musk, Wozniak Open Letter


Wynn McNamee | Getty Images

If you’ve been hearing a lot of pro-AI chatter these days, you’re not alone.

AI developers, prominent AI ethicists, and even Microsoft co-founder Bill Gates have spent the past week defending their work. That’s in response to an open letter, published last week by the Future of Life Institute and signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause on developing AI systems that can compete with human-level intelligence.

The letter, which now has more than 13,500 signatures, warns of a “dangerous race” to develop programs such as OpenAI’s ChatGPT, Microsoft’s Bing AI chatbot, and Alphabet’s Bard, and expresses concern that such programs could have negative consequences, from the widespread spread of disinformation to the ceding of human jobs to machines.

But large swaths of the tech industry, including at least one of its biggest names, are pushing back.

“I don’t think asking one particular group to pause solves the challenges,” Gates told Reuters on Monday. Gates added that it would be difficult to enforce a pause across a global industry, though he agreed the industry needs more research to “identify the tricky areas.”

That’s what makes the debate interesting, experts say: The open letter may cite some legitimate concerns, but its proposed solution appears impossible to achieve.

Here’s why, and what could happen next, from government regulation to a potential robot uprising.

The open letter’s concerns are relatively straightforward: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

AI systems often come with programming biases and potential privacy issues, and they can spread misinformation widely, especially when used maliciously.

And it’s easy to imagine companies trying to cut costs by replacing human jobs, from personal assistants to customer service representatives, with AI language systems.

Italy has already temporarily banned ChatGPT over privacy concerns stemming from an OpenAI data breach. The U.K. government published regulatory recommendations last week, and the European Consumer Organisation has also called on lawmakers across Europe to tighten regulations.

In the United States, some members of Congress are calling for new laws to regulate AI technology. The Federal Trade Commission issued guidance last month for businesses developing such chatbots, suggesting the federal government is keeping a close eye on AI systems that could be used by fraudsters.

And multiple state privacy laws passed last year aim to force companies to disclose when and how their AI products work, and to give customers the chance to opt out of providing personal data for AI-automated decisions.

These laws are currently in force in California, Connecticut, Colorado, Utah and Virginia.

At least one AI safety and research company isn’t worried yet: Current technologies don’t “raise immediate concern,” it says.

Anthropic, which received a $400 million investment from Alphabet in February and has its own AI chatbot, wrote in a blog post that future AI systems could become “much more powerful” over the next decade, and that building guardrails now could “help reduce risks” down the road.

The problem: Nobody knows what those guardrails could or should look like, Anthropic wrote.

A company spokesperson told CNBC Make It that the open letter is useful insofar as it encourages conversation about the topic. The spokesperson declined to say whether Anthropic would support a six-month pause.

In a tweet on Wednesday, OpenAI CEO Sam Altman acknowledged that an “effective global regulatory framework including democratic governance” and “sufficient coordination” among leading artificial general intelligence (AGI) companies could help.

But Altman, whose Microsoft-funded company created ChatGPT and helped develop Bing’s AI chatbot, didn’t specify what those policies might entail, and he didn’t respond to CNBC Make It’s request for comment on the open letter.

Some researchers raise another issue: Pausing research could stifle progress in a fast-moving industry and allow authoritarian countries developing their own AI systems to get ahead.

Highlighting AI’s potential threats could also encourage bad actors to adopt the technology for malicious purposes, says Richard Socher, an AI researcher and CEO of an AI-backed search engine startup.

Exaggerating the immediacy of those threats also feeds unnecessary hysteria around the topic, Socher says. The open letter’s proposals “cannot be enforced, and it addresses the problem on the wrong level,” he adds.

The tepid response to the open letter from AI developers suggests that tech giants and startups alike are unlikely to voluntarily pause their work.

The letter’s call for increased government regulation appears more likely to take hold, especially since lawmakers in the U.S. and Europe are already pushing for transparency from AI developers.

In the U.S., the FTC could also establish rules mandating that AI developers train new systems only with datasets that exclude misinformation and implicit biases, and that they increase testing of those products before and after they’re released to the public, according to a December advisory from law firm Alston & Bird.

Such efforts need to happen before the technology advances further, says Stuart Russell, a computer scientist and leading AI researcher at the University of California, Berkeley, who co-signed the open letter.

A pause would also give tech companies more time to prove that their advanced AI systems do “not present an undue risk,” Russell told CNN on Saturday.

Both sides seem to agree on one thing: The worst-case scenarios of rapid AI development are worth avoiding. In the short term, that means providing transparency to users of AI products and protecting them from fraudsters.

In the long term, it could mean keeping AI systems from surpassing human-level intelligence, and maintaining the ability to control them effectively.

Gates himself voiced similar concerns to the BBC as far back as 2015.
