Robert F. Kennedy Jr. has become an AI evangelist. Last week, the Secretary of Health and Human Services touted the technology during a stop in Nashville on his “Take Back Your Health” tour, between slamming ultra-processed foods and encouraging Americans to eat more protein. “My agency is now leading the federal government in bringing AI into everything we do,” he declared. Kennedy said an army of bots would transform healthcare, eliminate fraud, and put virtual doctors in everyone’s pockets.
RFK Jr. has been touting his promise to bring AI to his department for months. “The AI revolution has arrived,” he told Congress in May. The following month, the FDA launched Elsa, a custom AI tool designed to speed drug reviews and aid the agency’s operations. In December, HHS released an “AI Strategy” outlining how it will use the technology to modernize the department, support scientific research, and advance Kennedy’s “Make America Healthy Again” agenda. A CDC official showed me a recent email sent to all staff encouraging them to start experimenting with tools such as ChatGPT, Gemini, and Claude. (Several HHS officials we spoke with for this article did so on the condition that we withhold their names, so they could speak freely without fear of professional repercussions.)
But the full extent of federal health agencies’ commitment to AI is only now becoming clear. Late last month, HHS released a list of nearly 400 uses for the technology. Taken at face value, these applications do not amount to an “AI revolution.” The agency uses or is developing chatbots to generate social media posts, compile public-records requests, and write “justifications for personnel actions.” One use the agency points to is simply “AI in Slack,” referring to the workplace-communication platform. The chatbot on RealFood.gov, the new government website that explains Kennedy’s vision for America’s diet, promises “real answers about real food,” but it just opens xAI’s chatbot, Grok, in a new window. Many applications are frankly mundane: managing electronic medical records, reviewing grants, summarizing vast scientific literature, and extracting insights from messy data. There are multiple IT-support bots and AI search tools.
Entries buried in the inventory suggest the agency may be relying on AI to replace the thousands of HHS employees who have been laid off or voluntarily bought out over the past year. For example, the database points to “staffing shortages” as the reason the agency’s civil-rights office is piloting ChatGPT to identify patterns in court decisions involving Medicaid.
There are many ways this could go wrong. AI tools continue to make unpredictable errors. It’s easy to imagine a fraud-elimination tool that mistakenly terminates someone’s Medicaid coverage, or a tool meant to help ICU doctors that recommends the wrong medication or dose. In May, the agency published its flagship “Make Our Children Healthy Again” report, which suggested the government would use AI to analyze trends in the prevalence of chronic diseases, including autism. The report contained numerous false citations that appeared to be AI hallucinations, which the White House blamed on formatting errors. HHS subsequently revised the report, removing the incorrect citations and replacing them with new references.
Several HHS officials said the department’s new AI tools make frequent errors and don’t necessarily fit into existing workflows. Despite the government’s big claims about Elsa, the chatbot is “so bad that it fails half of the tasks it’s asked to do,” an FDA official said. In one instance, an employee asked Elsa to look up the meaning of a three-digit product code in the FDA’s own public database; the chatbot spit out a wrong answer. According to the same staffer, an internal website showcasing potential uses for Elsa includes relatively mundane tasks such as creating data visualizations and summarizing emails, but because of hallucinations, “most people would rather read the documents themselves.” Another official said they tried to use Elsa to evaluate food-safety reports; the tool churned for a while and then failed, the employee told us, and the team concluded they were better off without it.
Some staff we spoke with had a more positive view. One CDC official said his team is “constantly reporting on efficiencies gained using AI,” even for routine use cases like document summarization. Many of the tools HHS is using appear well intentioned. For example, one tool used by federal and local health departments lets authorities analyze grocery-store receipts from people around the country suspected of having food poisoning, looking for commonalities in the foods they ate. HHS spokesman Andrew Nixon said in an email that “a small number of disgruntled employees” have had issues with the agency’s AI tools. Many staff members, he said, have “reported being more efficient at work.” Nixon added that despite the staffing shortage, the agency is “well equipped to accomplish its mission.”
In fairness, Kennedy is following a wave of automation already sweeping across healthcare. Medicine has become one of the biggest targets of AI hype, with many ongoing attempts to streamline the complex medical world and accelerate life-saving research. To take just one example, doctors may spend more than a third of their day writing notes, reviewing medical records, and processing insurance claims in electronic-medical-record systems. If AI products can automate even a small portion of that work, healthcare workers, who are chronically in short supply in the United States, could have more time to spend with patients. HHS, like many hospital networks across the country, is piloting AI tools that can streamline medical records. Startups are building all kinds of AI health tools, and OpenAI and Anthropic have recently launched healthcare products.
The biggest hopes for AI in medicine are far flashier: curing cancer, discovering new vaccines, and treating previously incurable diseases. And the department’s approach to AI shows some signs of these grander ambitions. The HHS AI inventory reports a number of more ambitious projects, such as using the technology to more quickly identify drug-safety concerns or to study the genome of the malaria parasite. These are AI tools with the potential to truly change the kind of work that doctors, epidemiologists, and medical researchers do. AlphaFold, the protein-structure-prediction model for which Google DeepMind researchers recently won a Nobel Prize, is already being used by scientists around the world, including at HHS, to advance drug discovery.
Still, generative AI will not immediately enhance the internal workings of HHS. (Even something as proven as AlphaFold only speeds up part of a very long drug discovery process.) This is probably a good thing. The technology has come a long way, but it is not ready to completely reshape one of the world’s most influential public health institutions. If HHS continues to stick to a gradual approach to AI implementation, it could result in significant improvements that are invisible to most people.
RFK Jr., however, may not be interested in stopping there. Many use cases are still being deployed or piloted, and the government’s AI inventory is full of jargon and clichés that can be interpreted in different ways. When the administration says that AI is being or could be used to “review the global influenza vaccine literature” or to analyze data in the Vaccine Adverse Event Reporting System, the end result may or may not be benign. When Kennedy talks about using AI to eliminate fraud, he may mean using the technology to lay off 10,000 more employees essential to the nation’s public-health infrastructure. The inventory outlines the means, not the motives. But in at least one listed use case, the intent is explicitly political: HHS is deploying AI to identify positions that violate President Trump’s executive orders on “ending radical and wasteful government DEI programs” and “defending women from gender ideology extremism.”
Generative AI may well prove to be a useful tool for bureaucratic efficiency and scientific research. But the more pressing question is not what the technology can do; it is what ends it will be used to serve.
