Anthropic's new AI models for classified information are already in use by the US government




About five weeks before President Donald Trump is expected to announce his administration's AI policies, major AI companies continue to strengthen their ties with the government, especially with national security customers.

On Thursday, Anthropic unveiled Claude Gov, a family of models built exclusively for US national security customers. Claude Gov covers everything from operations to intelligence and threat analysis, and is designed to interpret classified documents and defense contexts. The models also offer improved proficiency in languages and dialects, as well as enhanced interpretation of cybersecurity data.

Also: Anthropic's popular Claude Code AI tool is now included in the $20-per-month Pro plan

“The models are already deployed by agencies at the highest level of US national security, and access to these models is limited to those operating in such a classified environment,” the company said.

The company said Claude Gov, which was developed with feedback from Anthropic's government users, underwent the same safety testing it applies to all Claude models. The release asserted that Anthropic remains committed to safe and responsible AI, and that Claude Gov is no exception.

The announcement comes months after OpenAI released ChatGPT Gov this January, a sign that major AI labs are increasingly serving government use cases with explicitly tailored products. The National Guard is already using Google AI to improve disaster response.

(Disclosure: Ziff Davis, the parent company of ZDNET, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Before the Trump administration, military and other defense-related contracts between AI companies and the US government were less publicized, especially amid shifting usage guidelines at companies like OpenAI, which had once vowed not to engage in weapons development. Google has also recently revised its guidelines, while maintaining its commitment to responsible AI.

Also: Anthropic mapped Claude's morality. Here's what the chatbot values (and doesn't)

The deepening relationship between the government and AI companies comes in the larger context of the Trump administration's imminent AI Action Plan, scheduled for July 19th. Since Trump took office, AI companies have walked back Biden-era responsibility commitments, originally brokered by the US AI Safety Institute. OpenAI has advocated for lighter regulation in exchange for government access to its models, and alongside Anthropic has engaged with the government in new ways, including scientific partnerships and the $500 billion Stargate initiative.

