Indian Health Service CISO highlights AI as a tool to ‘make better decisions’

For Indian Health Service Chief Information Security Officer Benjamin Koshy, artificial intelligence has the potential to automate “mundane things” and bring advanced cyber capabilities across IHS’ distributed network.

Koshy said in a recent interview with Federal News Network that IHS has a “very broad geographic footprint,” with medical facilities and personnel spanning 37 states, from the bottom of the Grand Canyon to north of the Arctic Circle.

“We have a huge variety of different types of data sets that need to be protected,” Koshy said.

IHS is also in the midst of a major electronic health record (EHR) system modernization. The agency is transitioning from an EHR based on a system built for the Veterans Health Administration in the 1980s to a cloud-based Oracle health system.

Moving to a cloud environment also requires a change in the mindset of the IHS cybersecurity team.

“I think all security professionals know that the way you protect something on-premises is very different from the way you protect something in the cloud,” Koshy said. “You have to deal with a much larger attack surface, but it also helps with availability and redundancy.”

Koshy said IHS is looking to AI to automate “mundane” steps, from analyzing logs to automating parts of its cybersecurity strategy.

Koshy said AI can also help deploy more advanced capabilities beyond the routine, such as behavioral analysis and dynamic threat modeling. He said AI can help organizations “baseline” their security data and alert them to deviations from that baseline.

“I think that’s the way we have to move forward. Baseline what your normal is, and if there are deviations from that normal, put in the effort to make those deviations understandable, not only to the human eye but also to the AI: Is this going to be my new normal, or is this a targeted attack that we need to investigate?” Koshy said.
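The “baseline your normal, flag the deviations” idea Koshy describes can be illustrated with a minimal sketch. This is not IHS’ implementation; it is a generic z-score check over hypothetical hourly event counts pulled from security logs, with an arbitrary threshold:

```python
# Illustrative only: establish a baseline from historical event counts,
# then flag new observations that deviate sharply from that baseline.
# The data and the threshold of 3 standard deviations are hypothetical.
from statistics import mean, stdev

def build_baseline(counts):
    """Summarize 'normal' as the mean and standard deviation of history."""
    return mean(counts), stdev(counts)

def flag_deviation(count, baseline, threshold=3.0):
    """Return True if the observation's z-score exceeds the threshold."""
    mu, sigma = baseline
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Hypothetical hourly failed-login counts from past security logs
history = [12, 9, 11, 10, 13, 8, 12, 11, 10, 12]
baseline = build_baseline(history)

print(flag_deviation(11, baseline))  # within the normal range -> False
print(flag_deviation(95, baseline))  # large spike -> True, worth investigating
```

A flagged spike is exactly the question in the quote above: a shifted baseline (a new normal) or an attack. Answering it still takes an analyst, which matches the human-oversight point made later in the piece.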

While the rapid advancement of AI’s cybersecurity capabilities has raised concerns that junior analyst jobs will be lost, Koshy said IHS has historically been short-staffed in security.

“To be clear, I don’t want to lose anyone,” Koshy said. “For me, AI is just another tool that analysts use. At the end of the day, it’s better to have a level 1 analyst who knows how to use AI to do their job more effectively than not have any AI at all.”

AI security training

As IHS implements AI in its cybersecurity operations, Koshy said it is encouraging staff to take AI training courses to understand how the technology works. He compared this to previous generations of cybersecurity professionals who had to know how to manage security vulnerabilities using Microsoft Excel.

“In the future, we will need to know how to use AI to support our security operations. Whether it is to help draft policies, use AI to ensure that all use cases are properly implemented and have all the necessary references, create firewall rules or automate certain processes, we will need to know how to use AI,” said Koshy.

“How does the AI do that? And what should we look at as analysts to review the AI’s behavior? And if it appears that the AI has misbehaved, how can we understand the AI’s thought process and fix that problem so that the AI behaves the way we want it to behave?” he continued.

Koshy said AI knowledge will be needed across all aspects of the cybersecurity business, from architecture and engineering to incident response, policy, security awareness, and even risk and compliance.

“Whether it’s creating work products or automating processes, it just helps us become more efficient,” Koshy said. “Having that knowledge is going to be essential to making sure we’re working effectively and ensuring that the human staff engaged in this kind of work are using their time wisely, using AI to automate tasks that I don’t want them to work on, and focusing on the things that I want them to work on.”

Balancing automation and human oversight

AI training and experience are especially important when security teams deploy agentic AI. Koshy said the key is to balance automation with human oversight.

“Eventually, AI will replace some of the functions that analysts currently perform, and analysts will be freed up to focus on the things they really don’t want agentic AI to do,” he said. “There may be some specific sensitive cases involving large PII datasets that may still ultimately require a human eye.”

Koshy said his approach uses “AI as a tool to help make better decisions.”

“We’re not relying on AI to make decisions,” he added.

Managing AI risks requires trained staff who are experts in the field and understand when an AI-based system may not provide the correct answer or output.

“I don’t want to blindly trust agentic AI,” Koshy said. “We want to make sure there’s no fear when the appropriate experts reviewing these processes have to question the AI and consider exactly what data the AI looked at and why it didn’t consider this or that. But ultimately, as it evolves, we’re going to at least implement agentic AI into the security process, and humans will evolve within that process. They may be pushed to a higher level, but there’s always going to be some sort of balance between the two.”

Copyright © 2026 Federal News Network. Unauthorized reproduction is prohibited. This website is not directed to users within the European Economic Area.




