What Is Anthropic?

Anthropic is an artificial intelligence research and development company founded in 2021. The company's goal is to responsibly advance the field of generative AI and deploy safe and reliable AI models publicly. Anthropic's flagship products include a chatbot named Claude and a family of large language models (LLMs), also named Claude.

What Is Anthropic?

Anthropic is an artificial intelligence company that develops the AI chatbot Claude and conducts research and development in artificial intelligence with a special focus on safety and interpretability.

With significant backing from tech giants such as Google and Amazon, and a reported valuation of over $18 billion, Anthropic has emerged as an industry leader in AI safety and is playing a key role in shaping AI policy in the U.S. The company's top LLMs are widely considered to be some of the most capable on the market.

What Is Anthropic?

Anthropic is an AI research and development startup founded in 2021 by siblings and former OpenAI executives Dario Amodei and Daniela Amodei. In addition to offering the Claude chatbot and LLMs, Anthropic is committed to safety and ethics, setting a new standard for responsible innovation across the artificial intelligence industry.

Anthropic's founders, along with five other colleagues, left OpenAI in 2020 over concerns about OpenAI's lack of commitment to safety. They launched Anthropic as a public benefit corporation (PBC), which is legally required to prioritize positive social impact in addition to profit. For Anthropic, this means building “reliable, interpretable, and steerable AI systems” and conducting “cutting-edge research” on AI safety, according to the company's mission statement.

“Anthropic is about using technology with a purpose,” Chris Dessi, a technology founder and author of the book ChatGPT Profit: A Beginner's Guide to Leveraging AI in Your Business, told Built In. “OpenAI has brought AI to the masses, and Anthropic is trying to make AI a little more responsible.”

To that end, Anthropic is taking a more cautious approach than competitors like Google, OpenAI and Microsoft to researching and developing some of the world's most powerful AI systems. While other companies rush to release AI products as quickly as possible, Anthropic chooses to exercise restraint, holding back models above certain capability thresholds until it can develop sufficiently robust safeguards.

“Anthropic is a whole different animal,” Mike Finley, CTO of generative AI analytics company AnswerRocket, told Built In. “They're willing to wait, and they're willing to hold back.”

The company's work comes at a critical time for artificial intelligence: generative AI products are transforming how we live, work, and create, while also raising concerns about everything from plagiarism to disinformation.

Ultimately, by focusing on AI safety, Anthropic aims to make generative AI a more stable and trustworthy technology, and hopes to encourage other AI companies to adopt similar efforts and push for stronger government regulation going forward.

“They want to make a safe model, but they also want other people to make safe models,” Finley said. “They're trying to raise the bar.”

Learn more about AI regulation: An AI Bill of Rights: What You Need to Know

What does Anthropic do?

Anthropic is an AI research and development company that is not only committed to designing and developing its own products, but also to advancing the field of artificial intelligence as a whole, particularly in the areas of safety and interpretability.

Claude

Claude is a chatbot developed by Anthropic. It responds to user prompts in a natural, human-like way: it can carry on a conversation, generate text and translate between languages, and it is multimodal, meaning it accepts both text and images as input.

Claude can run on any one of the LLMs in the Claude model family at a given time, depending on whether the user is a Claude Pro subscriber. According to Anthropic:

  • Claude 3 Haiku is the fastest and most compact of the Claude models, designed to perform targeted tasks quickly and accurately.
  • Claude 3 Sonnet strikes a balance between intelligence and speed, making it especially well suited for enterprise workloads.
  • Claude 3 Opus has demonstrated “top-level performance, intelligence, fluency and comprehension” across a range of open-ended questions and “never-seen-before scenarios,” outperforming peers such as GPT-4 and Gemini on most common LLM evaluation benchmarks.
  • Claude 3.5 Sonnet is Anthropic's most intelligent model to date. According to Anthropic, the model can grasp “nuance, humor, and complex instructions,” is “exceptionally” good at writing high-quality content in a “natural, relatable tone,” and demonstrates strong agentic coding abilities, meaning it can write, edit and run code independently.

Claude 3.5 Sonnet is the first release in Anthropic's Claude 3.5 model family; by the end of 2024, the company also plans to release Claude 3.5 Haiku and Claude 3.5 Opus.

Anthropic also provides an API that developers can use to build their own products on top of the Claude models.
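As a rough illustration, a call to Claude through Anthropic's Python SDK looks something like the sketch below. The model identifier and token limit are illustrative; check Anthropic's documentation for current values.

```python
# A minimal sketch of calling Claude via Anthropic's Python SDK
# (pip install anthropic). The client reads an ANTHROPIC_API_KEY
# environment variable; the model name below may be outdated.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model identifier
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize Constitutional AI in two sentences."}
    ],
)

print(message.content[0].text)  # the text of the model's reply
```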

Constitutional AI

To help develop safer and more reliable language models, Anthropic devised a training methodology it calls “Constitutional AI,” which uses a set of ethical principles to guide a model's outputs. The process, detailed in the paper “Constitutional AI: Harmlessness from AI Feedback,” involves two steps: supervised learning and reinforcement learning.

In the supervised learning step, the model critiques its own outputs against a set of pre-established guiding principles, or “constitution,” and then revises its responses to follow the constitution more closely. The model is fine-tuned on those revised responses.

In the reinforcement learning step, the model goes through a similar process, but this time its outputs are evaluated and ranked by a second model. The preference data collected during this phase is used to fine-tune the initial model, ideally teaching it to avoid harmful responses without relying solely on human feedback.
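In code, the supervised critique-and-revise step can be pictured roughly as follows. This is a simplified sketch, not Anthropic's implementation: generate is a hypothetical stand-in for a language model call, and the two principles are illustrative, not Anthropic's actual constitution.

```python
# A simplified sketch of Constitutional AI's supervised step:
# generate an answer, critique it against each principle, revise it.
# `generate` is a hypothetical placeholder for a real LLM call.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that is most honest about its own uncertainty.",
]

def generate(prompt: str) -> str:
    """Hypothetical model call; swap in a real LLM API here."""
    return f"<model output for: {prompt[:40]}...>"

def critique_and_revise(user_prompt: str) -> dict:
    """Return a (prompt, revision) pair suitable for fine-tuning."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle: {principle}\n"
            f"Response: {response}"
        )
        response = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return {"prompt": user_prompt, "revision": response}
```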

While Anthropic's AI models may still produce biased and inaccurate answers, Constitutional AI “is certainly recognized as one of the most powerful ways to address this problem,” Alex Strick van Linschoten, a machine learning engineer at ZenML, told Built In.

Interpretability Research

A big part of Anthropic's research work is trying to understand exactly how and why AI models make the decisions they do, an ongoing challenge in the industry. Many AI systems are not explicitly programmed but use neural networks to learn how to speak, write, make predictions, perform calculations, and so on. How exactly they arrive at those outputs remains a mystery.

Anthropic researchers have made breakthroughs in this field: in 2024 they reverse-engineered Claude 3 Sonnet, allowing them to understand and steer some of the LLM's behavior, a discovery that could help address current AI safety risks and make future AI models safer.

Learn more about interpretable AI: Explainable AI Explained

Anthropic vs. OpenAI

Anthropic and OpenAI are two of the most prominent companies working to advance the field of artificial intelligence, but they're doing so in different ways.

Different Corporate Structures

Originally founded as a nonprofit, OpenAI switched to a “capped-profit” model in 2019, making it easier to raise venture capital and grant stock to employees. The company still says its for-profit subsidiary is fully governed by its nonprofit charter and retains formal fiduciary responsibilities. Still, some researchers believe OpenAI's for-profit model undermines the company's claims to “democratize AI.”

Because Anthropic is a public benefit corporation, its board of directors is legally required to balance private and societal interests and to report regularly to shareholders on how the company is promoting the public interest. Failure to meet these obligations can expose the company to shareholder litigation.

Anthropic is also governed by a Long-Term Benefit Trust (LTBT), a structure the company developed that gives five financially disinterested, independent trustees the power to elect and remove a portion of the board, based on board members' willingness to act in accordance with the company's mission: “to responsibly develop and maintain advanced AI for the long-term benefit of humanity.”

This approach was designed to keep Anthropic's board focused on the company's overall purpose, not just profits, and it also means that big investors like Amazon and Google can help the company grow without having overall control over it.

A different approach to AI safety

Like most AI developers, OpenAI primarily trains its models using reinforcement learning with human feedback (RLHF), where models receive guidance and corrections from humans. While this method helps reduce harmful outputs and generate more accurate responses, it is far from perfect, as humans make mistakes and can unconsciously inject their own biases. Additionally, these models scale so quickly that it's difficult for humans to keep up.
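At the core of RLHF is a reward model trained on human preference comparisons: labelers pick the better of two model responses, and the reward model learns to agree. The sketch below shows the standard pairwise (Bradley-Terry) preference loss in simplified form; reward is a hypothetical placeholder for a trained neural network.

```python
# A minimal sketch of the pairwise preference loss behind RLHF.
# `reward` is a hypothetical placeholder; real systems score
# responses with a neural network trained on human comparisons.
import math

def reward(response: str) -> float:
    """Hypothetical scalar reward; replace with a trained model."""
    return 0.01 * len(response)  # placeholder scoring

def preference_loss(chosen: str, rejected: str) -> float:
    """Bradley-Terry loss: small when the human-preferred response
    outscores the rejected one, i.e. -log(sigmoid(margin))."""
    margin = reward(chosen) - reward(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The trained reward model then steers reinforcement learning,
# rewarding the LLM for responses humans would prefer.
print(preference_loss("a helpful, accurate answer", "a curt reply"))
```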

Anthropic is building safety directly into the design of its LLMs with Constitutional AI. The company has also established several teams to address various AI safety concerns, including interpretability, security, alignment, and societal impact. Additionally, it has an in-house framework called AI Safety Levels to address some of the more catastrophic risks related to artificial intelligence. Among other things, this framework limits the scaling and deployment of new models if their capabilities outpace the company's ability to follow safety protocols.
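Conceptually, the framework gates deployment on whether safeguards keep pace with capabilities. The sketch below is purely illustrative of that gating idea, not Anthropic's actual implementation.

```python
# An illustrative sketch (not Anthropic's implementation) of the
# gating idea behind AI Safety Levels: a model is held back whenever
# its measured capabilities outpace the safeguards in place.
from dataclasses import dataclass

@dataclass
class ModelEvaluation:
    name: str
    capability_level: int  # e.g. ASL rating from capability evals
    safeguard_level: int   # highest level current safety protocols cover

def can_deploy(evaluation: ModelEvaluation) -> bool:
    """Deploy only if safeguards cover the model's capability level."""
    return evaluation.capability_level <= evaluation.safeguard_level

print(can_deploy(ModelEvaluation("new-model", capability_level=3, safeguard_level=2)))  # False
```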

Similar model performance

In terms of performance, Anthropic's and OpenAI's models are comparable: the models designed for speed (Claude 3 Haiku and GPT-3.5 Turbo) perform similarly, as do the more intelligent models (Claude 3 Opus and GPT-4).

That said, Anthropic claims its most advanced model, Claude 3.5 Sonnet, outperforms OpenAI's most advanced model, GPT-4o. Standard evaluation benchmarks for AI systems (undergraduate-level knowledge, coding, graduate-level reasoning, multilingual math and so on) technically suggest that Anthropic's models have superior knowledge and language understanding, but the difference is fairly small, and both companies are constantly improving.

“It's hard to say who's smarter,” Finley said, “but I think the report card would say Claude is safer, less likely to hallucinate and more likely to tell me when it doesn't know [an answer].”

What does Anthropic do?

Anthropic designs and develops its own AI products and conducts research to improve the safety and interpretability of AI systems overall.

What is the difference between Anthropic and OpenAI?

Anthropic and OpenAI both aim to advance the field of artificial intelligence, but in very different ways. OpenAI is primarily focused on developing models that push the boundaries of what AI is capable of, with the ultimate goal of artificial general intelligence. Anthropic also develops highly sophisticated language models, but it prioritizes safety, striving to develop and deploy its products (and future AI systems) in a way that minimizes risk and maximizes public benefit. Additionally, Anthropic operates as a public benefit corporation, meaning it is legally required to balance its for-profit objectives with creating a positive social impact, whereas OpenAI is moving to a more traditional for-profit structure.

Is Claude better than ChatGPT?

According to Anthropic, Claude 3.5 Sonnet (the model that powers Claude) outperformed GPT-4o (the model that powers ChatGPT) on several common industry benchmarks. While this may technically indicate that Claude has better knowledge and language understanding than ChatGPT, the difference was fairly small. And both Anthropic and OpenAI are continually improving their models.


