President Donald Trump has directed his administration to begin work on establishing a national artificial intelligence (AI) regulatory framework to override cumbersome state-level legal frameworks.
The latest Executive Order (EO) issued from the Oval Office, Securing a National Policy Framework for Artificial Intelligence, builds on the January 2025 order titled Removing Barriers to American Leadership in Artificial Intelligence. In it, President Trump blasted his predecessor, Joe Biden, for trying to cripple the industry through regulation.
President Trump claimed his administration has since delivered “tremendous benefits,” leading to trillions of dollars of investment in AI projects across the country.
President Trump said that to win the AI race, American artificial intelligence companies must be allowed to innovate, but that they are being held back by “excessive” regulation at the state level. This creates a patchwork of 50 different regulatory regimes, making compliance much more difficult, especially for start-ups, he said.
Trump also cited Colorado's law banning algorithmic discrimination, and criticized some states for enacting laws that require “ideological bias” to be incorporated into AI models. The president argued that such laws could force AI models to generate erroneous results in order to avoid “discriminatory treatment or effects against protected groups.”
“My administration must work with Congress to ensure a national standard that minimizes the burden, rather than 50 mismatched state standards,” he wrote.
“The resulting framework must prohibit state laws that conflict with the policies set forth in this order. The framework must also ensure that children are protected, censorship is thwarted, copyrights are respected, and communities are protected. A carefully designed national framework can ensure that America wins the AI race, as we must.”
Special committee
The order, on the basis that it is U.S. policy to “maintain and enhance” global AI dominance through “a minimally burdensome national policy framework,” directs U.S. Attorney General Pam Bondi to establish an AI Litigation Task Force within the next month to challenge state AI laws that the administration deems inconsistent with the EO on a variety of grounds — for example, laws that “unconstitutionally regulate interstate commerce,” or laws that Bondi herself determines are simply illegal.
The EO also requires Secretary of Commerce Howard Lutnick, in consultation with various other stakeholders, to publish within 90 days an assessment of existing state AI laws that are inconsistent with broader policy or legislation that could be referred to a special committee.
At a minimum, this assessment is designed to identify those that require AI models to alter their truthful output or force developers or adopters to process information in an unconstitutional manner, especially with respect to the First Amendment, which covers free speech.
The EO also contains provisions restricting certain federal funding, particularly for broadband deployment, to states with restrictive AI laws, and directs agencies such as the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) to review national reporting and disclosure standards that could preempt conflicting state laws in their areas. It further proposes legislation to create a uniform federal AI policy that would preempt conflicting state laws, with some exceptions in areas such as child safety, computing and data center infrastructure, and national procurement and use of AI.
Kevin Kirkwood, chief information security officer at cybersecurity firm Exabeam, said the central idea of establishing a federal framework that preempts state laws is not necessarily without merit, whatever one thinks of Trump's chosen approach.
“An executive order alone won't really force a decentralized ecosystem to align with a single vision, but let's not confuse tactics with principles,” he said. “The underlying argument is sound: AI regulation should be national in scope, not a patchwork of state legislatures that don't even agree on what constitutes an algorithm.
“Artificial intelligence is a national and global infrastructure layer. Allowing 50 states to enact inconsistent and siloed laws on how AI is developed, deployed, and audited creates friction, uncertainty, and a tremendous burden of compliance. Whether it comes from Congress or an executive order, a unified federal framework is essential for America to remain competitive, united, and capable of setting global standards.”
Kirkwood acknowledged the argument that federal preemption undermines local control, but said that when it comes to AI, local control would lead to piecemeal standards that benefit no one “except maybe lawyers.”
“California may want aggressive AI safety regulations, but if New York and Florida don't agree, developers will be navigating a maze of contradictory rules,” he said. “Such a patchwork of regulations doesn't protect people and stifles innovation. It's not hard to imagine a future where startups build to the least regulated states and geofence others. It's a race to the bottom disguised as consumer protection.”
Missing the point?
Ryan McCurdy, vice president of marketing at database change governance platform Liquibase, acknowledged that federal coordination on AI is a good idea, but said the EO misses the point.
“A single rulebook will mean nothing unless it addresses the fundamental problem behind every AI failure: the lack of governance over the data structures that feed these models,” he said. “Model-level rules cannot protect the public if the underlying data is inconsistent, adrift, or untraceable.
“So the real question is whether the national standard requires evidence,” McCurdy said. “Evidence of how the model is trained, evidence of how the data evolves, evidence of how the organization prevents unauthorized or dangerous changes. That's the difference between real oversight and a press release.
“If the United States wants to take the lead in AI, we need more than just a unified rulebook,” he said. “We need standards that fundamentally force AI systems to be explainable, manageable, and accountable.”
