Minnesota uses artificial intelligence to fight AI fraud

(TNS) — A brazen fraudster and the Minnesota authorities working to combat abuse in social services have something in common: both rely on artificial intelligence to achieve their goals.

ChatGPT helped criminals create fake client notes that allowed a sham company to collect millions of dollars in Medicaid reimbursements for services that were never provided. State leaders, meanwhile, are betting on machine learning to parse thousands of provider claims in hopes of identifying those that deviate from policy.

The situation, AI being used to detect AI, reflects a distinctly modern challenge facing Minnesota officials as they scramble to stop a snowballing fraud scandal. They are using a variety of tools to thwart the schemes that brought national attention to the state and helped derail Gov. Tim Walz’s bid for a third term.


Prosecutors estimate that total fraud across 14 high-risk Medicaid programs over seven years could exceed $9 billion, though Walz has called that number a guess. Fifteen people have been charged so far in the housing and autism program fraud cases, and more charges are likely as state and federal investigations into social services continue.

As the crisis unfolds, some experts are praising Minnesota for deploying AI to uncover its more nefarious uses.

“It’s like the old adage: fight fire with fire,” said Jordan Burris, head of public sector at Socure, which provides AI-powered fraud prevention software to businesses and government agencies.

But some warn that AI-driven algorithms could mistakenly flag routine claims as suspicious, hurting providers who have done nothing wrong.

“They did a really great job,” said Mona Birjandi, an economist and director of data analysis at the New York law firm Outten & Golden. “But it is not without unintended consequences.”

When two Philadelphia men traveled to Minneapolis to take advantage of the Midwestern state’s generous social services, they turned to artificial intelligence to get the job done.

Anthony Jefferson and Lester Brown used ChatGPT to generate fake emails and notes about clients who were supposedly enrolled with their housing stabilization services company, court records show. The forged documents helped the men collect about $3.5 million from the assistance program for services they claimed to have provided to 230 Medicaid recipients.

The so-called “traveling fraudsters” pleaded guilty to wire fraud in February in what the Justice Department says were the first charges in Minnesota involving the use of AI in furtherance of a fraud scheme.

More such cases may be coming.

Minnesota Bureau of Criminal Apprehension Superintendent Drew Evans said at a February 26 press conference that the agency has noticed an increase in people using artificial intelligence to commit financial crimes, such as using AI-generated voices to impersonate others and steal money.

In Minnesota, prosecutors handling Medicaid fraud cases recently discovered progress notes that appeared to be AI-generated and had been submitted by mental health providers, according to a spokesperson for the state attorney general’s office.

Socure’s Burris says AI is accelerating bad actors’ attacks and eliminating the need for advanced training in data science. Nearly anyone now has access to technology that can create official-looking emails or quickly harvest large amounts of personal information from the internet for use in applying for government benefits.

So how do you beat modern-day scammers?

“Given today’s evolving AI fraud, the only way to get ahead of it is to use AI at scale to counter it,” Burris said.

As part of a comprehensive anti-fraud package Walz announced on February 26, more resources would be dedicated to using machine learning to identify suspicious claims early. If enacted, the proposal would build on the Department of Human Services’ work with Optum, a UnitedHealth Group subsidiary the state has used to conduct AI-powered reviews of claims.

John Eichten, deputy commissioner of Minnesota IT Services, said Optum used a “collection of analytics” to parse provider claims and find deviations from policy. These range from providers who say they see dozens of clients in a single day to those who repeatedly bill for identical amounts of time.
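Neither the state nor Optum has published the underlying analytics, but a minimal sketch of this kind of rule-based claims screening, written in Python with pandas, might look like the following. The column names, thresholds, and toy data are illustrative assumptions, not details of Optum’s actual system.

```python
import pandas as pd

# Toy claim records; the column names and values are hypothetical.
claims = pd.DataFrame({
    "provider_id":    ["P1", "P1", "P1", "P2", "P2", "P2"],
    "client_id":      ["C1", "C2", "C3", "C4", "C5", "C6"],
    "service_date":   pd.to_datetime(["2025-01-06"] * 3
                                     + ["2025-01-06", "2025-01-07", "2025-01-08"]),
    "minutes_billed": [480, 480, 480, 60, 45, 90],
})

# Rule 1: flag providers who bill for an implausible number of distinct
# clients on a single day (the threshold is an assumption, not a policy limit).
MAX_CLIENTS_PER_DAY = 20
daily = claims.groupby(["provider_id", "service_date"])["client_id"].nunique()
volume_flags = daily[daily > MAX_CLIENTS_PER_DAY]

# Rule 2: flag providers whose billed durations are suspiciously uniform,
# e.g. the exact same number of minutes on nearly every claim.
uniformity = claims.groupby("provider_id")["minutes_billed"].agg(
    share_identical=lambda m: m.value_counts(normalize=True).max()
)
uniformity_flags = uniformity[uniformity["share_identical"] >= 0.95]

print("Volume flags:\n", volume_flags)
print("Uniformity flags:\n", uniformity_flags)
```

In a real system, output like this would only prioritize claims for human review; as Eichten notes, a flag marks a deviation from policy, not proof of fraud.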

Optum found widespread billing irregularities in the state’s autism intervention program, one of 14 Medicaid-funded services that officials say are at high risk of fraud. But Eichten noted that flagged claims are not necessarily fraudulent.

While AI is a useful screening tool, it’s up to social services departments to dig deeper into these initial results to identify fraud, he said. (A spokesperson for the state Department of Human Services said investigators do not use AI in the post-payment review process.)

And there are potential pitfalls to relying on algorithms. Patients have accused insurance giant UnitedHealthcare of using a flawed AI program to deny post-acute care coverage to Medicare patients; the insurer called the allegations “unfounded.”

Birjandi, the economist, said improperly trained fraud-detection algorithms risk incorrectly flagging legitimate providers, and that there needs to be a clear process for challenging an algorithm’s initial determination. Eichten said the state has worked with Optum to continually refine its analytics and prevent the kind of false flags Birjandi described.
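Birjandi’s false-positive concern corresponds to a standard evaluation step. The sketch below assumes a hypothetical set of past investigations labeled fraud or legitimate, plus hypothetical model risk scores; none of these numbers come from the state’s system. It shows how raising the flagging threshold trades some recall for fewer legitimate providers being flagged.

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical outcomes of past investigations: 1 = confirmed fraud, 0 = legitimate.
y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
# Hypothetical risk scores from a fraud-detection model.
scores = [0.90, 0.70, 0.20, 0.80, 0.60, 0.10, 0.30, 0.95, 0.40, 0.15]

for threshold in (0.5, 0.7, 0.9):
    flags = [int(s >= threshold) for s in scores]
    false_positives = sum(1 for t, f in zip(y_true, flags) if f and not t)
    print(
        f"threshold={threshold}: "
        f"precision={precision_score(y_true, flags):.2f}, "
        f"recall={recall_score(y_true, flags):.2f}, "
        f"legitimate providers flagged={false_positives}"
    )
```

Tuning of this kind is one way to pursue the continual improvement Eichten describes, and the appeal process Birjandi calls for would sit downstream of whatever threshold is chosen.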

“We want an analysis that will point us in the right direction to investigate what actually represents fraud, waste and abuse,” he said.

Eichten said fighting fraud requires careful, intentional and persistent use of some of the same tools the bad actors employ. Authorities cannot afford to reject AI tools just because they are imperfect, he added.

“If you do that, you’re giving hackers and fraudsters a huge competitive advantage.”

©2026 Minnesota Star Tribune. Distributed by Tribune Content Agency, LLC.
