Anthropic says the only way to stop bad AI is to use good AI

Fatalism about artificial intelligence (AI) is having a bit of a moment.

And we mean doom in the “humanity will be destroyed” sense, not the “I, Robot” kind.

If the industry’s own doom-laden hype is to be believed, there is a non-zero chance that AI misuse could lead to human extinction within just a few years (5 to 10, by many observers’ estimates).

Yes, industry insiders argue that the disruptive new models really are that powerful, and really are evolving that fast.

Well-funded AI startup Anthropic released the latest version of its chatbot, Claude 2, on Tuesday (July 11). It is the company’s answer to OpenAI’s ChatGPT chatbot and Alphabet’s Bard product.

Anthropic’s AI bot is designed to be “helpful, harmless and honest.”

The company, a public benefit corporation, was founded in 2021 by OpenAI executives who spun off to form their own AI firm out of concern that OpenAI was becoming too commercial.

CEO Dario Amodei led the team that built GPT-2 and GPT-3 at OpenAI, and his sister, Daniela Amodei, who previously oversaw OpenAI’s policy and safety teams, is now Anthropic’s president.

“If this technology goes wrong, it can go quite wrong,” OpenAI CEO Sam Altman has previously said.

But are good guys with better AI chatbots the only way to stop so-called bad guys with harmful AI chatbots?

See also: Generative AI Is Eating The World — How To Avoid Indigestion

Kindness as a Competitive Advantage

Anthropic has raised over $1 billion from investors including Google and Salesforce; $500 million of that capital came from the failed and allegedly criminally run cryptocurrency exchange FTX.

A startup with only about 160 employees, according to The New York Times, Anthropic needs access to enormous amounts of capital: AI development is notoriously expensive, requiring massive data centers with a degree of computing power that was unthinkable just a few years ago.

Despite its small size, Anthropic is seen as a leader within the industry and a rival to much larger tech giants, according to the New York Times report; its management pedigree is one reason why.

The startup’s latest chatbot is built on a large language model (LLM) like its competitors, but it remains disconnected from the internet, unlike Alphabet’s Bard, and has only been trained on data up to December 2022, the Times reported.

That limit, however, is entirely by design.

Claude 2 is not radically different from other chatbots, but Anthropic claims its product is less likely to do harm than those trained and commercialized by its competitors.

The reason? Claude is trained to improve through reinforcement learning and is built with a training method known as Constitutional AI, which gives the model a written list of principles and instructs it to follow them. A second model then oversees the first, judging its outputs for compliance with those principles.
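To make that loop concrete, the sketch below shows the critique-and-revise cycle that Constitutional AI describes, in Python. The `generate` function is a hypothetical placeholder for any LLM call, and the two principles are illustrative stand-ins, not Anthropic’s actual constitution.

```python
# A toy sketch of the Constitutional AI critique-and-revise loop.
# `generate` is a hypothetical stand-in for an LLM call, not a real API.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest and helpful.",
]

def generate(prompt: str) -> str:
    # Placeholder: a real system would query a language model here.
    return f"<model output for: {prompt!r}>"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each
    written principle in the constitution."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{response}"
        )
        response = generate(
            f"Rewrite the response to address this critique:\n{critique}"
        )
    return response

print(constitutional_revision("How do I pick a strong password?"))
```

In Anthropic’s published description, pairs produced this way are used to train the second, overseeing model, which then scores outputs during reinforcement learning; the toy loop above only illustrates the critique step, not the full training pipeline.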

Enterprises can access Claude 2 via an application programming interface (API), and individual users can try it out in the same way as ChatGPT, Bard, and other AI products.
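For a sense of what that access looks like, here is a minimal sketch using Anthropic’s Python SDK as it worked around the time of this release (the completions endpoint with the `claude-2` model); endpoint and parameter names may have changed since, so treat it as illustrative rather than definitive.

```python
# Minimal Claude 2 API call via Anthropic's Python SDK (circa this release).
# Assumes an ANTHROPIC_API_KEY environment variable is set.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    # The SDK wraps the user's text in human/assistant turn markers.
    prompt=f"{HUMAN_PROMPT} Summarize Constitutional AI in one sentence.{AI_PROMPT}",
)
print(completion.completion)
```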

Also read: The biggest benefit of Enterprise AI is that it guides companies in both directions

Is It a Contradiction to Be the Safest AI Juggernaut?

Anthropic’s safety-first AI mission has helped polish the company’s image and garner goodwill from federal regulators, but more cynical industry observers have begun to worry about so-called mission drift, suggesting that AI companies are stoking public fatalism as a backdoor marketing tactic, according to the New York Times report.

After all, OpenAI was once a nonprofit founded with a mission similar to Anthropic’s.

But as companies grow, they eventually need to commercialize.

Anthropic plans to raise as much as $5 billion over the next two years to build an AI model up to 10 times more powerful than the current Claude 2, according to TechCrunch.

So is the company just another AI firm sounding the alarm about AI while quietly working to fuel the very arms race it warns about?

Given that no one in the AI field seems too keen to simply stop building the models they claim to be concerned about, time will tell, assuming we humans are still around when the proverbial clock strikes midnight.

If it ever does.


