Anthropic announced Claude Gov on Thursday, a product designed specifically for US defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information.
The company said the models are "already deployed by agencies at the highest level of US national security," and that access is limited to government agencies handling sensitive information. Anthropic did not confirm how long the models had been in use.
The Claude Gov models were built specifically to address government needs, such as threat assessment and intelligence analysis, per Anthropic's blog post. The company said the models "underwent the same rigorous safety testing as all of our Claude models," but that they have certain specifications for national security work. For example, they "refuse less when engaging with classified information."
The Claude Gov models, according to Anthropic, also have a greater understanding of documents and context within defense and intelligence, along with improved proficiency in languages and dialects relevant to national security.
The use of AI by government agencies has long been scrutinized for its potential harms and ripple effects on minorities and vulnerable communities. There is a long list of wrongful arrests across multiple US states stemming from police use of facial recognition, along with documented bias in predictive policing and discrimination in government algorithms that assess welfare aid. For years, there has also been industry-wide controversy over large tech companies such as Microsoft, Google, and Amazon allowing militaries to use their products, particularly in Israel.
Anthropic's usage policy specifically stipulates that users must not create or facilitate the exchange of "illegal or highly regulated weapons or goods," including using its products or services to "produce, modify, design, sell, or distribute weapons" or other systems "designed to cause harm to or loss of human life."
At least 11 months ago, the company said it had created a set of contractual exceptions to its usage policy, "carefully calibrated to enable beneficial uses by carefully selected government agencies." Certain restrictions, such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations, remain prohibited. But Anthropic can decide to "tailor use restrictions to the mission and legal authorities of a government entity," with the stated intent of enabling beneficial use of its products and services while mitigating potential harms.
Claude Gov is Anthropic's answer to ChatGPT Gov, OpenAI's product for US government agencies, which launched in January. It is also part of a broader trend of AI giants and startups alike looking to strengthen their businesses with government agencies, especially in an uncertain regulatory landscape.
When OpenAI announced ChatGPT Gov, the company said that in the past year, more than 90,000 employees of federal, state, and local governments had used its technology for tasks such as translating documents, generating summaries, drafting policy memos, writing code, and building applications. Anthropic declined to share similar numbers or use cases, but the company is part of Palantir's FedStart program, a SaaS offering for companies that want to deploy federally oriented software.
Scale AI, the AI giant that provides training data to industry leaders such as OpenAI, Google, Microsoft, and Meta, signed a contract with the Department of Defense in March for a first-of-its-kind AI agent program for the US military. It has since expanded its business to governments around the world, recently signing a five-year deal with Qatar to provide automated tools for civil services, healthcare, transportation, and more.
