President Donald Trump signed an executive order Thursday that aims to prevent states from enforcing their own regulations on artificial intelligence and instead create a “single national framework” for AI.
“This is an executive order directing all aspects of our government to take decisive action to ensure that AI operates within a single national framework in this country, rather than subjecting the industry to state-level regulations that could cripple the industry,” White House staff secretary Will Scharf said in the Oval Office.
The order could have far-reaching implications for U.S. efforts to govern a fast-developing technology that is already a significant part of the economy and stock market but remains in many ways untested.
David Sacks, the White House's AI and crypto czar, said at the signing ceremony that the executive order directs the administration to work with Congress to create a “federal framework” on AI.
“In the meantime, this EO provides administrative tools to push back on the most onerous and excessive state regulations,” Sacks said. In particular, Sacks emphasized that the government has no intention of challenging state-level regulations on child safety and AI. He later wrote in a social media post that the EO “does not mean the government will challenge every state's AI laws.”
In July, Congress rejected a Republican attempt to bar states from regulating AI. Before passing President Trump's sweeping domestic policy bill, the U.S. Senate voted almost unanimously to strip from it a 10-year moratorium on state enforcement of artificial intelligence regulations.
Lawmakers also declined to add an AI moratorium to the National Defense Authorization Act despite a proposal by President Trump.
Silicon Valley leaders such as OpenAI CEO Sam Altman argue that navigating a patchwork of state regulations could slow innovation and affect America's competitiveness in the global AI race with China, with economic and national security implications.
Critics worry that the push for deregulation could allow AI companies to avoid liability if their tools harm consumers.
Artificial intelligence is expanding into more areas of American life, from personal communications and relationships to healthcare and law enforcement, few of which are subject to systematic oversight.
In the absence of broad federal legislation, some states have passed laws to address potentially dangerous and harmful uses of AI, such as deceptive deepfakes and employment discrimination.
But the debate over how to regulate AI has caused divisions not only within the industry but also within the conservative movement and the Republican Party.
On one side are figures in the administration such as Mr. Sacks and Vice President J.D. Vance, who are pushing for a lighter-touch regulatory framework. On the other are figures such as Florida Gov. Ron DeSantis and former White House chief strategist Steve Bannon, who have spoken out in favor of state-level regulation, arguing that the rapidly advancing technology requires such guardrails.
Brad Carson, president of Americans for Responsible Innovation and a leader of Public First, a PAC that advocates for AI regulation, said in a statement that the executive order “will likely hit a wall in court.”
Carson added that the order “directly attacks state-passed safeguards that have seen vocal public support over the past year,” without offering any federal alternative in their place.
By contrast, Collin McCune, head of government affairs at venture capital firm Andreessen Horowitz, described the order as a “very important first step” but urged Congress to fill the regulatory gap.
“States have an important role to play in responding to harms and protecting people, but the long-term clarity and direction the country needs can only come from Congress,” he wrote on X.
