Discovery Into Insurers' Use of AI to Deny Claims Allowed in Court

Applications of AI


In Estate of Gene B. Lokken v. UnitedHealth Group, Inc., No. 23-CV-3514 (JRT/SGE) (D. Minn.), the plaintiffs alleged that the defendant insurance companies used artificial intelligence programs to deny claims without human review. The plaintiffs sought discovery into the insurers' use of AI. When the insurers refused, the plaintiffs filed a motion to compel, and a federal court in Minnesota granted the motion in part. Although the case involves health insurance, its principles apply broadly to other types of insurance. Insurers increasingly use AI to evaluate and even deny claims without human review, as well as to contend that policyholders' claimed costs are excessive. Courts are beginning to allow discovery into how AI is used in the claims process. Requests for AI chat files, AI usage policies, and documentation of AI oversight should therefore become a standard part of policyholder discovery requests in coverage litigation going forward.

Court Decision

The plaintiffs sought seven categories of documents related to the insurer's use of the AI program nH Predict to evaluate claims for post-acute care: "(A) post-acute care policies and procedures; (B) development and use of nH Predict; (C) corporate acquisitions and financial data regarding nH Predict's economic benefits; (D) internal and government investigations into UHC's use of artificial intelligence ('AI'); (E) naviHealth employee incentives; (F) oversight of UHC's use of AI; and (G) information about employees who issue Notices of Medicare Non-Coverage ('NOMNCs')." The insurer refused to produce these documents, and the plaintiffs moved to compel.

The court granted the motion in part. It granted the second and sixth categories in full and denied the third and fifth. It granted the remaining requests in part, ordering production of documents related to employee training on the use of AI, government (but not internal) investigations into the insurer's use of AI, and information regarding only certain employees. Notably, the court found that "plaintiffs are entitled to disclosure of documentation regarding how nH Predict works, its development goals and anticipated benefits, and whether it is designed to replace physician decision-making."

Analysis

This is the second recent ruling in which a federal court has allowed discovery of AI-related evidence. As discussed in a recent article by our colleagues, in United States v. Heppner the Southern District of New York recently held that AI chat files are unprivileged and discoverable. The court's principal concern in that case was that the AI program's terms of service allowed the developer to share chat files with third parties, including the government. As a result, the user had no reasonable expectation of confidentiality, which is a requirement for maintaining privilege.

As AI's role in society expands, it is natural that its use will increasingly be probed in litigation. Parties should now routinely seek discovery regarding the other side's use of AI, including chat files and policies governing its use. They should also take steps to ensure that sensitive information they do not want disclosed in litigation is not entered into AI platforms without appropriate safeguards. Relatedly, companies should consider how their document retention and deletion policies apply to AI chats, to avoid discovery sanctions for failing to preserve relevant documents.

Specific to insurance coverage disputes, the Lokken decision confirms that policyholders should seek discovery of an insurer's use of AI to evaluate and deny claims or to dispute policyholders' coverage costs. Although the court narrowed some requests, it granted broad discovery into the role of AI in the claims process. And although this particular case involves health insurance, its general reasoning and observations apply to all types of insurance.

For example, the Lokken court allowed discovery into whether AI was used to "replace physician decision-making." In litigation involving property or liability insurance, policyholders can likewise seek discovery into whether an AI program was used to replace the decision-making authority of an adjuster. Insurers must carefully consider each claim on its merits, and policyholders have a right to know the basis for an insurer's coverage decisions. If an insurer wrongly and unfairly denies a claim based on AI output with little or no human review, that could be evidence of bad faith in the claims process or in the decision itself. Using a software program to eliminate human decision-making can also violate claims-handling rules if the AI program makes an error. For example, if a denial rests on an AI hallucination that no human caught, policyholders could use that fact to argue the insurer acted in bad faith. Whether decisions are made by humans, AI, or both, they must be rational and supported by the facts and the policy's terms. Discovery into whether coverage decisions were based in whole or in part on AI is therefore important.

Conclusion

Policyholders, and litigants in general, need to consider the role of AI in litigation. Companies should be aware that both the information entered into AI programs and the AI's output may ultimately be subject to disclosure. In particular, privileged information should not be entered into AI programs that share user data; otherwise, companies risk waiving privilege. On the other side of the coin, litigants should routinely request the opposing party's AI chat files, AI usage policies, and related documents, and should depose company employees and Rule 30(b)(6) witnesses regarding the use of AI in relevant matters.

In the insurance context, policyholders should investigate what role AI played in reviewing and denying their claims. An insurer's use of AI to deny a claim, or to contend that certain expenses are not covered, could be evidence of bad faith if the denial was wrongful and no human verified its reasonableness.


