Over the past decade, health insurance companies have increasingly embraced the use of artificial intelligence algorithms. Unlike physicians and hospitals who use AI to help diagnose and treat patients, health insurance companies use these algorithms to determine whether to pay for health care treatments and services recommended by a particular patient's physician.
One of the most common examples is prior authorization, which is when a doctor must obtain the insurance company's approval before providing care. Many insurers use algorithms to decide whether the requested care is “medically necessary” and should be covered.
These AI systems also help insurers decide how much care a patient is entitled to, for example, how many days a patient can remain in the hospital after surgery.
If your insurance company refuses to pay for the treatment your doctor recommends, you usually have three options. You can appeal the decision, but that process takes substantial time, money and expert help; only about 1 in 500 claim denials are appealed. You can agree to a different treatment that your insurer will cover. Or you can pay for the recommended treatment yourself, which is often unrealistic given high medical costs.
As a legal scholar who studies health law and insurance, I am concerned about how insurance algorithms affect people's health. Like the AI algorithms used by doctors and hospitals, these tools can potentially improve care and reduce costs. Insurers say AI helps them make quick, safe decisions about what care is necessary and avoid wasteful or harmful treatments.
However, there is strong evidence that the opposite can be true. These systems are sometimes used to delay or deny care that should be covered, all in the name of saving money.
A pattern of withholding care
Presumably, companies feed a patient's health care records and other relevant information into coverage algorithms and compare that information against current medical standards of care to decide whether to cover the patient's claim. However, insurers refuse to disclose how these algorithms reach their decisions, so it is impossible to say exactly how they work in practice.
Using AI to review coverage saves insurance companies time and resources because fewer medical professionals are needed to review each case. But the financial benefit to insurers doesn't stop there. If an AI system quickly denies a valid claim and the patient appeals, the appeals process can take years. If a patient is seriously ill and expected to die soon, the insurance company might save money simply by dragging out the process in the hope that the patient dies before the case is resolved.
This gives insurers an incentive to use algorithms to withhold care for expensive, long-term or terminal health problems, such as chronic or other debilitating conditions. As one reporter has noted, many older people covered by Medicare who face amputation or cancer are forced to either pay for care on their own or go without it.
Research supports this concern: patients with chronic illnesses are more likely to be denied coverage and to suffer the consequences. In addition, Black and Hispanic people and other nonwhite ethnic groups, as well as people who identify as lesbian, gay, bisexual or transgender, are more likely to experience claim denials. And some evidence suggests that prior authorization may actually increase, rather than reduce, health care system costs.
Insurers argue that patients can always pay for denied treatments themselves, so they are not actually being denied care. But this argument ignores reality. These decisions have serious health consequences, especially when people cannot afford the care they need.
Moving toward regulation
Unlike medical algorithms, insurance AI tools are rarely regulated. They are not required to undergo Food and Drug Administration review, and insurance companies often claim their algorithms are trade secrets.
That means there is no public information on how these tools make decisions and no outside testing to verify that they are safe, fair or effective. Nor is there any peer-reviewed research showing how well they actually work in the real world.
There does seem to be momentum for change. The Centers for Medicare & Medicaid Services, the federal agency that administers Medicare and Medicaid, recently announced that insurers offering Medicare Advantage plans must base coverage decisions on the needs of individual patients, not just generic criteria. But these rules still allow insurers to create their own decision-making standards, and they do not require outside testing to prove the systems work before insurers use them. Moreover, federal rules can only regulate federal public health programs such as Medicare; they do not apply to private insurers that do not provide coverage through federal health programs.
Some states, including Colorado, Georgia, Florida, Maine and Texas, have proposed legislation to rein in insurers' use of AI. A few have enacted new laws, including a 2024 California statute that requires a licensed physician to supervise the use of insurance coverage algorithms.
However, most state laws suffer from the same weaknesses as the new CMS rules. They leave too much control in insurers' hands to define “medically necessary” and to decide in which contexts algorithms are used to make coverage decisions. They also do not require these algorithms to be reviewed by neutral experts before use. And even strong state laws would not be enough, because states generally cannot regulate Medicare or insurers operating across state lines.
The role of the FDA
In the view of many health law experts, the gap between insurers' practices and patients' needs has grown wide enough that regulating health care coverage algorithms is essential. As I argue in an essay to be published in the Indiana Law Journal, the FDA is well positioned to do so.
The FDA is staffed with medical experts who have the expertise to evaluate insurance algorithms before they are used. The agency already reviews many medical AI tools for safety and effectiveness. FDA oversight would also provide a uniform national regulatory scheme rather than a patchwork of state rules.
Some argue that the FDA's authority here is limited. For the purposes of FDA regulation, medical devices are defined as products intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment or prevention of disease. Because health insurance algorithms are not used to diagnose, treat or prevent disease, Congress may need to amend the definition of a medical device before the FDA can regulate these algorithms.
If the FDA's current authority is not broad enough to cover insurance algorithms, Congress could change the law to grant it that power. In the meantime, CMS and state governments could require independent testing of these algorithms for safety, accuracy and fairness. That might also encourage insurers to support a single national standard, such as FDA regulation, rather than face a patchwork of rules across the country.
The movement to regulate how health insurers use AI in coverage decisions has clearly begun, but it still needs a decisive push. Patients' lives are literally on the line.
Jennifer D. Oliva currently receives funding from NIDA to investigate the impact of drug industry messaging on the opioid crisis among US military veterans. She is a member of the Law, Science & Health Policy program at UC Law San Francisco and of Georgetown University Law Center's O'Neill Institute for National & Global Health Law.
/Courtesy of The Conversation. This material from the originating organization/author may be of a point-in-time nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions and conclusions expressed herein are solely those of the author.
