A standoff between the Department of Defense and artificial intelligence company Anthropic PBC over the past few weeks has focused attention on the government’s use of AI, particularly for surveillance of American citizens.
Anthropic insists its models should not be used for the mass surveillance of Americans. The Pentagon rejected that restriction, insisting on contract language that would allow Anthropic's technology to be used for "any lawful use." As a result, the Trump administration is seeking other AI partners willing to permit broader use.
This moment requires lawmakers to put aside short-term political motives and embrace their role as stewards of a system of checks and balances. This means not making decisions based on who sits in the Oval Office, but establishing guardrails that best serve the country far into the future.
After news of the Pentagon standoff became public, many observers sided with Anthropic, highlighting the privacy risks posed by AI. Their concerns echo warnings such as this one: "the use of AI technology to actively monitor the personal transactions, bank accounts, and related financial information of millions of Americans without legal process is deeply concerning" and "raises serious questions" about "respect for Americans' fundamental civil liberties."
Did that particular warning come from an AI company that shares Anthropic's concerns? A nonprofit pushing for ethical limits on AI? A vocal critic of President Donald Trump and Secretary of Defense Pete Hegseth? No. It comes from an oversight letter written two years ago by the Republican leadership of the House Select Subcommittee on the Weaponization of the Federal Government.
The letter was directed at the Biden administration and raised concerns that the IRS was using AI to monitor Americans' "personal transactions and bank accounts." At the time, I was serving as IRS Commissioner.
These were important questions then, and they may be even more urgent today. Two years ago, the IRS's use of AI was in its relative infancy, limited to phone chatbots, enhanced fraud filters for reviewing returns, and faster computer coding.
Treasury Secretary Scott Bessent, however, has signaled a broader adoption of AI. Specifically, he noted that through the "smarter IT and AI boom," the IRS will be able to operate with fewer staff while still ensuring the public pays the taxes it owes.
The plan may prove effective, but it underscores the ongoing relevance of the issues House Republicans raised two years ago. Notably, a recent Government Accountability Office report confirms the IRS's growing use of AI while also noting gaps in governance and internal controls.
The current debate about the Department of Defense's use of AI for national defense presents different considerations than the use of AI for tax enforcement. But the core questions are surprisingly similar: What guardrails should government apply to AI when it analyzes vast amounts of data about citizens?
Calls for answers about the government's use of AI came primarily from Republicans during the Biden administration; now they are also coming from Democrats. On the bright side, this means both parties are asking important questions about AI and privacy. Less encouraging is that the timing of those questions seems to depend on who is sitting in the White House.
This is a common pattern whenever questions arise about expanding or contracting the powers of the executive branch. When one party gains power, its opponents tend to denounce broader authority. When leadership changes, yesterday's vocal critics fall almost silent.
This silence may simply reflect partisan loyalty. Or it may be a political calculation that powers vested in the president today can be curtailed tomorrow. That is a risky bet when it comes to AI.
Once AI is incorporated throughout government operations, it may be difficult, if not impossible, to remove. Temporary enforcement priorities can be reversed relatively easily; AI is different.
AI will rapidly become pervasive, embedded in systems, processes, and decision-making. Case in point: The Department of Defense gave its employees a six-month grace period to stop using Anthropic tools, but recently clarified that their use could be extended beyond that period if essential to national security. Bottom line: AI can be difficult to untangle.
This makes the current policy debate all the more important and urgent. A recent draft AI bill introduced by Sen. Marsha Blackburn (R-Tenn.) focuses primarily on consumer protection, innovation, and competition. It would also require the government to procure only "fair" tools.
Those are important questions. But the framework does not directly address guardrails for the government's own use of AI. Nor does it clarify or refute what many have long assumed, and what a court appeared to uphold in a ruling favorable to Anthropic last week: that private contractors can place meaningful limits on how the government uses their intellectual property and technology, and, if challenged, can refuse the work entirely, usually without consequence.
Whether the AI revolution happens quickly or slowly, it is here now, and its impact will be profound. As Congress debates how to govern this technology, long-term checks and balances deserve bipartisan attention.
Danny Werfel served as IRS Commissioner from 2023 to 2025. He is currently the resident director of the Johns Hopkins School of Policy Studies and a distinguished fellow at Duke University's Center on Policing, where he writes about the intersection of tax and policy.