Automated systems play an increasingly important role in shaping access to health services. Insurers often use algorithms and artificial intelligence (AI) to route requests, determine coverage, complete records and forms, and even make recommendations about medical necessity. Automation provides speed, but over-reliance on AI systems creates risks such as inappropriate denials, biased decision-making, and a lack of independent clinical review. Too often, computers and algorithms replace, rather than supplement, clinicians' judgments and recommendations about needed care.
A recent national survey of health insurance companies found that most are already using automated AI systems to process prior authorization (PA) requests. Across the individual and group markets, roughly 3 in 4 plans report using AI to approve PAs, which can help reduce delays. However, a small but significant share (approximately 8-12%) use AI to support PA denials. These automated denials put patients' access to care most at risk.
The administration's recent executive order on AI, "Securing a National Policy Framework for Artificial Intelligence," aims to limit states' ability to enact and enforce their own AI safeguards. Earlier this year, Congress rejected a proposal that would have limited states' ability to regulate AI. The EO directs the Department of Justice to identify and challenge state laws deemed inconsistent with federal AI policy, even though that policy has not yet been defined. It also encourages the Justice Department to target state laws it deems "onerous or excessive." Although not specific to health policy, the EO is aimed squarely at state laws regulating AI systems and automated decision-making in health care. The threat of Justice Department involvement is real pressure, but the EO cannot preempt state law in this way.
In the absence of an enforceable federal AI regulatory framework, states are increasingly filling the gap by passing their own AI legislation. Two common types are AI-specific laws that cover high-risk uses of AI and laws that limit how prior authorization decisions can be made. AI-specific state laws often create new nondiscrimination protections (one of the very targets the EO sets for the Department of Justice). Others clarify obligations and provide for enforcement of existing laws, such as establishing transparency requirements and ensuring that consumer protections apply to the use of AI in high-risk areas (often defined to include health care and health insurance). For example, Colorado's landmark legislation, the Consumer Protections in Interactions with Artificial Intelligence Systems Act, applies to AI used in health care decisions, including utilization decisions.* The law provides bias protections, requires plans to disclose key data and methodologies, and guarantees individuals the right to challenge AI-generated medical decisions.
PA-specific laws often require clinician review of automated decisions, prohibit fully automated denials, and require public reporting on approval and denial patterns and processes. For example, Texas passed a law in 2025 that prohibits utilization reviewers from using automated decision systems to make adverse determinations without human oversight. Arizona and Maryland adopted similar legislation prohibiting the use of AI as the sole basis for denying medical necessity.
The EO threatens to weaken enforcement of these protections and push states toward less meaningful reforms that are easier for payers to evade. If states do not advocate for patients, automation could harm people who already lack protection against opaque algorithmic systems, denials that are difficult to challenge, and processes shielded as proprietary trade secrets. This is not a red state or blue state issue. Every state should take steps to protect consumers from AI-driven harms.
The Trump administration is also pushing for greater use of AI in health care. The Centers for Medicare & Medicaid Services (CMS) launched WISeR (Wasteful and Inappropriate Services Reduction) through its Innovation Center, a pilot program that applies AI to prior authorization for select traditional Medicare items and services and is being tested in six states. HHS hopes the program, scheduled to begin on January 1, will set a precedent for expanding AI to more HHS programs. HHS also recently released a revised AI strategy, which Secretary Kennedy described as a "template for the use of AI" across the federal government, signaling that HHS is "fully committed" to AI. This week, HHS also released a request for information "soliciting public input on how HHS can accelerate the implementation and use of artificial intelligence as part of the clinical care of all Americans." If AI-powered prior authorization becomes standardized in these programs and vendors can invoke it as a "federally recognized new standard," it will be much harder for states to regulate.
Health care provider groups, including the American Medical Association, have strongly criticized WISeR, and companion bills to block it have been introduced in both the House and Senate. While it is encouraging that Congress is highlighting the risks of AI in PA, the bills are unlikely to gain traction this session.
As we enter the new year, conflicts over AI-driven PA (along with other healthcare utilization controls) will continue to play out at both the federal and state levels. There are several developments worth noting.
- Details regarding the implementation of WISeR and emerging patterns in denials or appeals will be critical as stakeholders assess its impact.
- Whether the SMARTER Care Act gains traction in Congress will indicate how seriously lawmakers take concerns about AI in health care.
- States may pull back from AI guardrails and prior authorization protections that apply to health insurers out of fear of preemption or legal challenges.
- States that have already taken steps to regulate AI and PA could become test cases for whether they can design safeguards to survive federal challenges under the EO.
Stay tuned to the 2026 NHeLP Pre-Approval Series for more updates.
*Colorado's special legislative session delayed enforcement by six months, to June 2026.
