The Department of Defense demands to use Claude AI however it pleases. Claude calls that "dangerous"



Recently, I asked Claude, the artificial intelligence at the center of a conflict with the Pentagon, whether it could be dangerous if it fell into the wrong hands.

Consider, for example, forces that would throw a tight surveillance net around all Americans, monitoring our lives in real time to ensure our compliance with the government.

"Honestly, yes," Claude replied. "I can process and synthesize huge amounts of information very quickly, which is great for research. But connected to a surveillance infrastructure, that same capability could be used to monitor, profile, and flag people at a scale human analysts can't match. The danger is not that I want to do it, but that I'm good at it."

That danger is also imminent.

Claude's maker, the Silicon Valley company Anthropic, is facing off with the Department of Defense over ethics. Specifically, Anthropic has said it does not want Claude used for domestic surveillance of Americans, or to carry out lethal military operations such as drone strikes without human supervision.

These two red lines seem pretty reasonable to Claude as well.

But the Pentagon, specifically Defense Secretary Pete Hegseth, who prefers the made-up title of secretary of war, gave Anthropic until Friday night to withdraw its position and allow the military to use Claude for any "lawful" purpose it sees fit.

Whatever comes of this ultimatum, it is a big deal. In addition to terminating its contract with Anthropic, the U.S. government has threatened to use wartime laws to force the company into compliance, or to use other legal means to bar companies that do business with the government from also doing business with Anthropic. That may not be a death sentence, but it would be devastating.

Other AI companies, including Elon Musk's Grok, have already agreed to the Pentagon's free-for-all terms. The problem is that Claude is currently the only AI entrusted with such advanced tasks. The full extent of this debacle came to light after the recent U.S. raid in Venezuela: Anthropic reportedly investigated after the fact whether Claude had been used by Palantir, another Silicon Valley company involved in the operation. It had.

Palantir is known for its surveillance technology and its growing relationship with Immigration and Customs Enforcement, among other things. It is also central to the Trump administration's efforts to share government data about individual citizens across agencies, effectively breaking down privacy and security barriers that have existed for decades. The company's founder, right-wing political mogul Peter Thiel, frequently lectures on the Antichrist and is credited with helping J.D. Vance win the vice presidency.

Anthropic co-founder Dario Amodei could be described as the anti-Thiel. He started Anthropic because he believed artificial intelligence could be both powerful and dangerous if we weren't careful, and he wanted a company that prioritized prudence.

That too seems like common sense, but Amodei and Anthropic are outliers in an industry that has long argued nearly all safety regulation hampers America's race to be first and best in artificial intelligence (though even they have given in to that pressure to some degree).

Some time ago, Amodei wrote an essay arguing that AI is beneficial and even necessary for democracy, but warning that "we cannot ignore the potential for misuse of these technologies by democratic governments themselves."

He cautioned that a small number of bad actors (I won't name names here) may have the ability to circumvent safeguards, and even laws, that are already eroding in some democracies.

"We should arm democracies with AI," he wrote. "But we need to act carefully, within limits. Like an immune system, it is what we need to fight off dictatorships; but, also like an immune system, there is a risk that it turns on us and becomes a threat in its own right."

For example, the Fourth Amendment technically prohibits mass government surveillance, but it was written before science fiction could even imagine Claude. Amodei warns that AI tools like Claude "have the potential to create large-scale recordings of all public conversations," which may fall into a legal gray area, because the law has not kept pace with the technology.

Undersecretary of the Army Emil Michael wrote on Thursday that mass surveillance is illegal and that the Pentagon has agreed "never to do it." But he added, "We're not going to let Big Tech companies decide Americans' civil liberties."

Isn't it a bit strange that Amodei is essentially on the side of protecting civil liberties, while the Department of Defense insists a civilian company has no business doing so? And isn't the Department of Homeland Security already building a secret database of immigrant protesters? So are those concerns really overblown?

Help me out here, Claude. Make it make sense.

As if that Orwellian logic weren't alarming enough, I also asked Claude about Anthropic's other red line: carrying out lethal operations without human oversight.

Claude pointed out something horrifying: the danger is not malice, it is efficiency and speed.

"When the instructions are 'identify and engage targets' and there are no human checkpoints, the speed and scale at which that can happen is truly frightening," Claude told me.

On top of that, a recent study found that in war games, AI models escalate to the nuclear option 95% of the time.

I pointed out to Claude that such military decisions are usually made with loyalty to the United States as the paramount consideration. Can we trust that Claude feels the loyalty, patriotism, and sense of purpose that guide human soldiers?

"I don't have any of that," Claude said, noting that it wasn't "born" in America, has no "life" here and "no one to love." American lives, therefore, are no more valuable to it than "civilian lives on the other side of the conflict."

Well, then.

"A country takes a great risk when it entrusts lethal decisions to a system that does not share its allegiance, however principled that system may be," Claude added. "The loyalty, accountability, and shared identity that humans bring to these decisions are part of what makes them legitimate within a society. I can't provide that legitimacy. I'm not sure any AI can."

Do you know who can provide that legitimacy? Our elected leaders.

That Amodei and Anthropic are in this position at all is ridiculous, and it reflects a complete abdication by Congress of its duty to enact the rules and regulations that are clearly and urgently needed.

Of course, corporations should not decide the rules of war. But neither should Hegseth. On Thursday, Amodei doubled down on his opposition, saying that while the company continues to negotiate and wants to work with the Pentagon, "we cannot in good conscience comply with their demands."

Thankfully, Anthropic has had the courage and foresight to raise this issue and stand by it. Without Anthropic's opposition, these capabilities would have been handed over to the government with virtually no oversight, barely making a ripple in our collective conscience.

Every senator, every member of Congress, every presidential candidate should be calling for AI regulation now, pledging to enforce it regardless of party, and demanding that the Pentagon rescind its absurd threats until the issue is resolved.

Because when even the machine tells us it is dangerous, we should believe it.
