Michigan’s use of AI to process SNAP applications raises concerns about past automation failures



This story was originally published by Michigan Advance.

The Michigan Department of Health and Human Services has begun using artificial intelligence to increase the number of Supplemental Nutrition Assistance Program cases it can consider, department officials told members of the Senate DHHS Appropriations subcommittee last week.

Discussing efforts to comply with new federal requirements, the department’s chief operating officer, David Knezek, said the department has introduced an AI case reading tool to help employees go line-by-line through cases to ensure the department is making accurate decisions about payments before any money is disbursed.

Under HR 1, also known as the “One Big Beautiful Bill Act,” states are required to pay a portion of their SNAP benefit costs based on their payment error rate, or how accurately the state determines eligibility and benefit amounts for households participating in the program. In analyzing the change, the nonpartisan Brookings Institution noted that the measure does not count wrongful denials of applicants as errors and that the rate is not a measure of fraud.

Knezek told committee members that the department can only manually review a relatively small number of cases.

“With this AI case reading tool, we will not only be able to scan all cases in a perfect environment before any money is out, but we will also be able to target the cases that are most likely to experience payment error rates,” said Knezek.

MDHHS Chief Operating Officer David Knezek (right) speaks before the Senate Appropriations Subcommittee on DHHS, March 17, 2026. | Screenshot

He pointed out that single-person and two-person households had the highest error rates, while households with more members accounted for the largest dollar-value errors.

“AI case reading tools allow us to target cases with the highest likelihood of fraud,” Knezek said.

Knezek said the department is also deploying an optical character recognition tool to scan documents submitted to the department, such as pay stubs, and enter the information, reducing human error on the front end while allowing for human verification on the back end.

On Monday, Michigan Advance asked the department when the AI case reading and character recognition tools were introduced, which programs are being used, whether it disclosed to applicants that AI was being used to review their cases, and what safeguards are in place.

Time and again, we see these AI systems being deployed without proper testing, leaving recipients as guinea pigs in AI experiments, and this is unacceptable.

– Michele Gilman, law professor at the University of Baltimore School of Law

Two days later, department spokeswoman Erin Stover said the department has been using optical character recognition tools for several years and recently began using AI-assisted case reading to aid in case reviews.

Stover said in an emailed statement that while the tools are used to flag discrepancies, eligibility staff remain responsible for all case determinations.

“AI-assisted case reading capabilities are part of a broader effort to enhance accuracy and prepare for federal policy changes under HR 1 that increase the importance of accurate eligibility determinations,” Stover said.

Stover said the department uses tools approved by the Department of Technology, Management and Budget within a secure system and does not use publicly available generative AI to process cases.

“Safeguards are in place to protect applicant data, which is only accessible to authorized personnel and treated in accordance with state and federal privacy requirements,” Stover said.

Applicants are also advised that their information may be verified through a data matching and vetting process as part of the eligibility determination, and all applications are subject to review to determine eligibility in line with federal requirements, Stover said.

Stover later told the Michigan Advance that the state’s case reading tool uses Google Vertex AI, which the company describes as “a unified, open platform for building, deploying, and scaling generative AI, machine learning models, and AI applications.”

New AI tools aim to reduce errors, but raise common concerns

The agency’s decision to incorporate artificial intelligence into case decisions is reminiscent of the state’s 2013 effort to automate the review of unemployment cases through the Michigan Integrated Data Automation System (MiDAS), which led to multiple lawsuits and settlements that awarded repayments and damages to many individuals wrongly accused of fraud.

Undark Magazine reported that more than 40,000 people were accused of misrepresentation in the first two years after the system’s launch, with the agency demanding repayments of around five times the original benefit amounts.

The Michigan State Auditor’s Office subsequently investigated 22,000 cases marked as fraudulent and determined that 93% did not actually involve fraud.

There are many reasons to be wary and concerned about the Department of Health and Human Services’ use of AI to make SNAP decisions, given the state’s track record of using algorithmic fraud detection systems, Michele Gilman, a law professor at the University of Baltimore School of Law, told the Michigan Advance.

One of the key questions on Gilman’s mind is how well tested and vetted the case reading tools are.

“Time and again, we see these AI systems being deployed without proper testing, leaving the recipient as a guinea pig in an AI experiment, and that is unacceptable,” Gilman said.

Michele Gilman, professor of law, director of the Saul Ewing Advocacy Clinic, and co-director of the Center for Applied Feminism at the University of Baltimore School of Law. | University of Baltimore School of Law

One of the challenges with fraud detection systems, Gilman points out, is that the rate of actual fraud is low, whether for public benefits, banks, or credit cards. As a result, she said, programmers have a hard time building tools to detect fraud because they lack robust data, leading to high rates of false positives and false negatives.

According to the Benefits Technology Advocacy Hub, the MiDAS system flagged any data discrepancy, no matter how minor, as fraud and required applicants to follow up within 10 days. The system also averaged an applicant’s income over an entire period rather than looking at individual paychecks, leading to discrepancies in system-determined income and increasing fraud determinations.

Given that these systems generate false positives and negatives, a layer of human review becomes increasingly important, Gilman said. However, those reviewers must understand the limitations of the AI system to avoid unduly deferring to its decisions.

Gilman said technology can play a supporting role alongside officials, but the ultimate responsibility lies with the agency.

“It shouldn’t be a case of ‘the vendor failed’ or ‘the AI went haywire,’ because the real accountability ultimately lies with government officials,” Gilman said.

She pointed to the AI Risk Management Framework released by the National Institute of Standards and Technology, which emphasizes human involvement at every stage of the AI lifecycle.

Jennifer Lord represented people accused of fraud in a class action lawsuit against the Michigan Unemployment Insurance Agency. While working on the Bauserman v. Unemployment Insurance Agency case, Lord also advocated for guardrails around the use of AI in government services, but said those efforts have yet to bear fruit.

Gilman said that under former President Joe Biden’s administration, there was significant attention on how AI could go wrong. In 2023, Biden issued an executive order putting guardrails on AI development and directing the U.S. Department of Agriculture and Health and Human Services to issue guidelines for the use of AI in programs such as SNAP and Medicaid. The guidelines discussed AI issues that impact civil rights and safety and acknowledged data privacy concerns.

Due process concerns loom over benefit recipients

But Gilman said the Biden administration’s emphasis on fairness, equity and accountability has been ignored, and the Trump administration has put less emphasis on consumer rights and more trust in AI companies.

“There is a belief in AI for cost reduction and efficiency, but that is unwarranted,” Gilman said.

Jennifer Lord | Sterling Employment Law

As a lawyer who represents low-income people receiving public benefits, Gilman said she doesn’t have many hooks to hang her hat on other than the due process rights guaranteed in the U.S. Constitution.

“At some point, you have a constitutional right to human review,” Gilman said, explaining that the problem with the state’s unemployment system was that the only way to get a case in front of human eyes was to file an appeal and go before an administrative law judge. Even then, she said, the system’s decisions could not be explained, creating a “black box” problem that rendered human review meaningless.

Lord noted that programs created to detect fraud typically overcorrect, raising further concerns about the role program developers play in determining public benefits.

“Right now, we have private companies that are basically writing the regulations and enforcing the laws, and their goal is to ‘save as much money as possible,'” Lord said.

Lord said if states hand over government functions to private entities that design and implement systems without checks and balances, we will be faced with new disasters like the MiDAS system.

Additionally, individuals who rely on public benefits are the ones with the least access to legal aid, Lord said.

“They’re already in a tough financial situation, otherwise they wouldn’t apply for benefits,” Lord said, noting that some individuals may not have a computer or the ability to keep to a tight schedule for difficult administrative decisions.

The Michigan Advance is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. The Michigan Advance maintains editorial independence. If you have any questions, please contact editor Jon King at info@michiganadvance.com.




