The U.S. Centers for Disease Control and Prevention on Thursday released a strategy and guidance documents on the use of artificial intelligence, intended both to guide the agency's own efforts and to provide resources to public health officials across the country.
The documents reflect a push to accelerate technology adoption, ensure employees have access to tools, and ensure those tools are properly managed. What stands out, though, is that they encourage the use of AI "agents" and "deep research" capabilities that can perform specific tasks autonomously, something the CDC is already working on.
According to the Department of Health and Human Services' recently published inventory of AI use cases, nearly 10% of the CDC's roughly 100 AI use cases in 2025 were agentic tools, accounting for approximately one-third of such deployments across the department.
To that end, the CDC's new strategy includes specific language on leveraging the technology to support public health, enhance research and data management, and improve access to data. At the same time, the agency released guidance for state, tribal, local, and territorial (STLT) public health officials on using AI agents in research, based on experience gained from its own work.
"One of the things we most often ask for from partners is guidance on this technology," Travis Hoppe, the CDC's acting chief AI officer, told FedScoop. He said the AI inventory shows where agencies are using the technology, and the new documents go further.
The promise of agentic tools for agencies like the CDC is that they can move beyond uses such as email summarization and prescriptive tasks, Hoppe said. For example, the CDC already uses deep research tools to speed the review of literature, data, policy, and other sources that inform decision-making.
A common job for CDC staff, Hoppe said, is spending three to four hours thoroughly researching a topic. Those topics range from emerging and current events to longer-term work aimed at building out a set of references. That is where the agentic AI capability known as "deep research" comes into play.
Hoppe said the CDC conducted a lengthy internal evaluation of deep research and found the tool effective for some applications but limited for others. The agency then applied what it learned and began deploying the technology where it worked.
"What's really exciting about this is that we found so many good uses through this tool, and so many places where this tool worked and where it fell short," Hoppe said. "We wanted to convey that to our STLT partners."
Specifically, the guidance recommends that partners use the technology when the scope of the problem is clearly defined, rapid synthesis of information is needed, its outputs can be verified by experts, or rich source material, such as CDC publications, is available.
Conversely, the CDC recommended against using deep research with restricted data sources or data containing sensitive information about individuals, in situations requiring expert judgment, or where a comprehensive literature review is needed.
A first for the CDC
Although the broader HHS already has its own AI strategy, the publication is a first for the CDC. That doesn't mean AI is a new technology for public health departments, however.
Hoppe told FedScoop that the document is a “formalization of policy” because the CDC has been leveraging AI and machine learning for public health at various levels “for decades.”
He said the HHS strategy released in December outlined the department's higher-level priorities, but the CDC wanted to focus on priorities specific to public health. The CDC's plan is intended to guide the agency for the next five years.
In addition to encouraging the use of agents, the strategy includes goals such as establishing governance of third-party AI systems, prioritizing enterprise solutions at the department level, and leveraging human resources options to recruit top AI talent.
A pillar of the agency's strategy is support for state and local partners.
Hoppe noted that while some agencies serve the public through direct service delivery or industry relationships, the CDC serves state and local partners in addition to the public.
In addition to the agentic research guidance, the agency also published considerations for implementing generative AI, likewise based on the CDC's own efforts. Notably, the CDC says it was the first in the federal government to roll out ChatGPT to all employees, in 2023, and it counts its use in thwarting Legionnaires' disease and its AI chatbot among its successes.
As for where the CDC goes next, Hoppe said the agency will "remain on the front lines." He said the agency is considering whether the technology could help with "operational readiness" within the CDC's mission, which includes questions about how models might need to improve before the agency considers using them.
As the technology continues to evolve rapidly, Hoppe said, it's important to think about what needs to change so the agency can revisit tools and see whether its needs are being met.
"I think that's probably the most important thing for us, and that's how I've approached all of this technology," Hoppe said. "We don't look at technology and say, 'Oh, this sucks. It doesn't meet our standards.' Rather, if these new things happen, it might meet our standards."
