Regulation of AI in Preauthorization and Claims Review: An Overview of Federal and State Consumer Protections

Federal regulation and oversight of AI

Although there are few federal standards that specifically apply to the use of AI in the prior authorization and claims review process, all insurance decision-making for both public and private insurance includes general standards aimed at ensuring that reviews are fair, substantive, and timely. These standards are fragmented across federal agencies, which have separate oversight responsibilities for different health insurance markets.

For private, employer-sponsored plans, the federal government, through the U.S. Department of Labor (DOL), oversees claims and appeals procedural requirements under the Employee Retirement Income Security Act (ERISA). ERISA generally exempts self-insured plans established by private employers from most state insurance laws, including claims review protections, and will likely preempt state AI laws related to the claims review process. Because most workers with employer-provided insurance are in self-funded plans, many consumers are not guaranteed whatever state protections may exist related to the use of AI in claims adjudication.

These ERISA claims and appeals rules became the basis for the Affordable Care Act reforms that applied to all private health insurance plans. These reforms established a federal floor of protection for internal claims and appeals procedures for consumers with private insurance in the Marketplace and outside the Marketplace, and added the option for all privately insured consumers to appeal denied claims through an “external review” by an organization independent of the plan.

ERISA requires all employer plan sponsors to provide a “full and fair” review of denied claims. What “full and fair” means in the context of the use of AI tools in claims procedures has yet to be interpreted through guidance and updated regulations. ERISA also includes “fiduciary” rules that require employers and other fiduciaries to act in the best interests of plan participants and monitor vendor activities. Although these standards may provide some protection to employees in connection with an employer plan’s use of AI, in practice the fiduciary standard rarely applies to employer health plans, and to date, enrollees have not successfully brought suit against employers for breaches of fiduciary duty related to sponsored health plans.

Still, the DOL’s recent lawsuit against a large third-party administrator (TPA) alleged fiduciary breach and violations of ERISA claims rules because the TPA automatically denied claims in bulk based on the terms of the plan without individually evaluating each enrollee’s medical needs. Although these claims did not necessarily involve AI, the TPA allegedly used an automated process to issue denials without human review. The case was resolved by establishing a fund to compensate enrollees for wrongly denied claims.

There is limited federal guidance specific to the use of AI in prior authorization and claims review in Medicare and Medicaid. Both programs have their own claims review and consumer protection standards based on federal requirements (some state standards also apply to Medicaid).

Medicare. The 2023 Medicare Advantage regulations and additional 2024 guidance make clear that Medicare Advantage organizations cannot determine medical necessity using algorithms or software that do not account for individual circumstances. Denials based on medical necessity must be reviewed by a medical professional. Proposed 2024 regulations addressing bias and discrimination in the use of AI by Medicare Advantage plans were not finalized by the Trump administration. Additionally, the federal government is testing the use of AI to make prior authorization decisions for certain traditional Medicare services through the Wasteful and Inappropriate Service Reduction (WISeR) model and has contracted with an AI technology company to manage this pilot program in six states.

Medicaid. Current Medicaid regulations do not directly address the use of automation in prior authorization. Medicaid managed care regulations require that decisions by managed care organizations (MCOs) to deny services be made by “individuals” with appropriate expertise, but do not explicitly mention the use of AI. Through state managed care contracts (reviewed and approved by CMS), states can set requirements for plan performance and reporting, such as requiring plans to disclose the use of AI during the prior authorization process. The Medicaid and CHIP Payment and Access Commission (MACPAC) recently released draft recommendations regarding the use of automation in Medicaid prior authorization.

State AI Consumer Protections in Prior Authorization and Claims Review

In recent years, some states have advanced laws and regulations aimed at protecting consumers from harms that can result from algorithmic decision-making systems, such as privacy violations, inaccuracy, and bias. AI-related bills continue to be debated in nearly every state legislature, and some initiatives are garnering bipartisan support. Some states have issued regulations and other guidance based on existing law in lieu of, or in addition to, new state law.

Both new and existing state laws provide AI consumer protections. Some state laws contain broad protections that span different sectors of the economy and apply to a wide range of actors, including developers and those who deploy or use technology for business purposes. Other state laws are specific to industry sectors (e.g., health care), themes (e.g., employment, civil rights, education), or uses, such as utilization review in health insurance.

Broadly applicable state laws include those prohibiting unfair or deceptive acts and practices. All 50 states have consumer protection laws that prohibit unfair or deceptive acts and practices. These laws are enforced by state attorneys general and, in some cases, allow consumers to sue directly for violations (a “private right of action”) rather than relying solely on state enforcement. Colorado and Utah are examples of states that have amended their consumer protection laws to provide general AI consumer protections.

Depending on the specific state law, these broader consumer protection laws may be used to address consumer harms resulting from the use of AI in prior authorization and claims review. Additionally, an increasing number of states are updating long-standing state health insurance standards for managed care related to utilization review and clarifying how these rules apply to AI (Figure 1). Almost all of this legislation focuses on the decision-making process of utilization review, which in some cases is defined under state regulations as an individualized determination of whether a particular service is medically necessary based on a patient’s individual clinical circumstances. These laws do not necessarily cover administrative claims review decisions that do not involve a determination of medical necessity, such as whether the claim is for care excluded from the plan.

Figure 1: State Laws on AI in Prior Authorization and Claims Review, Effective April 28, 2026 (choropleth map)

Each state law has its own requirements related to the use of AI in prior authorization and claims review, but the main themes are:

  • Claim denials require human review. Some state laws provide that only licensed health care providers can make adverse determinations (denials) and that AI cannot be the sole decision maker. For example, Illinois law requires that only “clinical personnel” make adverse determinations based on medical necessity and does not permit the sole use of “algorithmic automated processes” to make these decisions.
  • AI tools must consider individual clinical circumstances. Some of these states require that AI tools used for utilization review take into account the enrollee’s unique medical history. For example, Alabama requires insurers that use AI to make prior authorization decisions to ensure that those decisions are based on the enrollee’s medical history and clinical circumstances.
  • Disclosure of AI use. Some of these states, such as Utah, require organizations that use AI to conduct utilization review to disclose that use to the public, state health departments, in-network providers, and individual enrollees.
  • Review of AI tool performance. Some state laws also require utilization review organizations to periodically review the performance and results of the AI tools they use to check for accuracy and reliability. California law requires AI tools to be regularly evaluated and revised to maximize accuracy and reliability.
  • Limits on the use of patient data to protect privacy. Some of these state laws prohibit those conducting utilization review from using patient data beyond its intended purpose or in violation of HIPAA or state confidentiality protections. Maryland law is one example.
  • AI tools and their underlying algorithms must be open to inspection. Some of these laws require that AI tools used for utilization review be open to regulatory audit. In Texas, the insurance commissioner is authorized to audit and examine a utilization review agent’s use of automated decision-making systems at any time.
  • Protections against bias and discrimination. Some state laws, such as Washington’s, require that AI tools be applied “fairly and equitably” and not directly or indirectly discriminate against enrollees.

New state guidance exercises states’ existing authority to regulate the use of AI. Some states have issued guidance clarifying how existing state legal protections apply to AI. For example, in 2024, the Massachusetts Attorney General issued a public advisory explaining how the state’s existing consumer protection, civil rights, and data privacy laws apply to developers, suppliers, and users of AI, and how they may affect Massachusetts consumers.

Insurance regulators in some other states have taken a similar approach, issuing new guidance to clarify how existing state laws apply to AI and to provide insurers with more specific information about their obligations regarding its use. As of early April 2026, at least 25 states have issued guidance based on the model bulletin adopted by the National Association of Insurance Commissioners (NAIC) in 2023. The model bulletin applies to all types of state-regulated insurance (not just health insurance) and addresses the use of AI across all aspects of the insurance life cycle, including claims management and payment, fraud detection, product development, and rating and pricing. The bulletin establishes an expectation that consumer decisions made by or supported by AI systems comply with existing insurance laws and regulations, including protections from unfair trade practices and unlawful discrimination. It also directs insurers to adopt policies and procedures detailing how AI will be used and to put controls in place to reduce the risk of adverse outcomes. Finally, the bulletin provides that regulatory oversight includes the authority to investigate the development, deployment, use, and results of AI systems and predictive models used by insurers and their third-party vendors, as well as the authority to request information regarding system validation, testing, and ongoing auditing of AI systems.


