National Grid turns to AI to address cyber risks and regulatory complexity



National Grid is exploring how artificial intelligence (AI) can help risk and compliance teams monitor cybersecurity threats and regulatory changes across complex infrastructure systems.

Speaking at the ServiceNow AI Summit in London, Jody Elliott, head of risk and sustainability at the energy infrastructure operator, said AI is becoming a critical tool for analyzing operational data at a scale that is difficult to manage with human teams.

Power companies like National Grid operate vast digital assets that support their electricity transmission networks in the UK and parts of the US. This environment generates large amounts of data across hundreds of technology projects, making it difficult for risk teams to maintain oversight.

“Large organizations have multiple Agile projects running,” said Elliott. “From a risk perspective, how can you capture all the stories and features of all the planning sessions and all the backlogs that are running continuously?”

He added that it is not practical to embed a risk expert directly in every project, so organizations rely on governance frameworks and policies to monitor development activities. Generative AI (gen AI), he said, offers a more efficient way to analyze these environments.

“Generative AI in particular gives us the opportunity to analyze all unstructured data,” Elliott said, explaining that the technology can surface new risks across development backlogs and operational systems.

Rather than manually reviewing thousands of updates, AI tools can identify the most critical issues and flag them for investigation. This allows risk teams to focus on areas where security and regulatory issues are most likely to arise.
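As an illustration only (not National Grid's implementation, whose internals the article does not describe), a first-pass triage over backlog items might flag stories mentioning risk-relevant terms for human review before any deeper gen-AI analysis. The term list and story fields here are invented:

```python
# Hypothetical backlog triage: flag stories whose descriptions mention
# risk-relevant terms so a reviewer sees them first.
RISK_TERMS = {"authentication", "payment", "personal data", "encryption", "third-party"}

def flag_risky_stories(stories):
    """Return IDs of stories whose text contains any risk-relevant term."""
    flagged = []
    for story in stories:
        text = story["description"].lower()
        if any(term in text for term in RISK_TERMS):
            flagged.append(story["id"])
    return flagged

backlog = [
    {"id": "STORY-101", "description": "Add dark mode toggle to settings page"},
    {"id": "STORY-102", "description": "Store personal data from signup form"},
    {"id": "STORY-103", "description": "Rotate encryption keys for payment service"},
]
print(flag_risky_stories(backlog))  # ['STORY-102', 'STORY-103']
```

A production system would use an LLM or embedding search rather than exact substring matches, but the shape of the problem — thousands of items in, a short review queue out — is the same.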

Prioritizing cybersecurity threats

National Grid is also testing AI tools designed to improve vulnerability management across its technology assets.

The organization already collects extensive endpoint data from systems on its network, including information about operating systems and patch levels. However, correlating that data with information about newly disclosed vulnerabilities can take time.

“Humans can do it, but it takes time and it’s a full-time job,” Elliott said.

To address this, the company has developed an AI agent that automatically combines endpoint data with information about known vulnerabilities and exploit reports. The system analyzes these data sources in near real-time to identify the most significant security risks.
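The article doesn't detail how the agent works internally. As a rough sketch of the correlation step it automates — joining an endpoint patch inventory against a feed of disclosed vulnerabilities, with actively exploited issues surfaced first — the following uses entirely assumed field names and data shapes:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    hostname: str
    os_version: str
    patches: set       # installed patch identifiers

@dataclass
class Vulnerability:
    cve_id: str
    affected_os: str
    fixing_patch: str
    exploited: bool    # known active exploitation in the wild

def correlate(endpoints, vulns):
    """Pair each endpoint with disclosed vulnerabilities it has not patched."""
    findings = [
        (ep.hostname, v.cve_id, v.exploited)
        for ep in endpoints
        for v in vulns
        if v.affected_os == ep.os_version and v.fixing_patch not in ep.patches
    ]
    # Actively exploited vulnerabilities sort to the top of the queue
    return sorted(findings, key=lambda f: not f[2])

eps = [
    Endpoint("srv-01", "win2019", {"KB500"}),
    Endpoint("srv-02", "win2019", {"KB500", "KB501"}),
]
cves = [
    Vulnerability("CVE-2024-0001", "win2019", "KB501", exploited=True),
    Vulnerability("CVE-2024-0002", "win2019", "KB502", exploited=False),
]
print(correlate(eps, cves))
```

The value of the agent described in the article is not this join itself — which is simple — but running it in near real-time across a large, changing estate and layering business context on top.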

“We built the agent in about an hour,” said Elliott. Once running, it took “about 90 seconds to execute and output the results.”

The operations team then spent several days validating the results and confirming the accuracy of the analysis. The main advantage of this approach is that it incorporates business context into cybersecurity decision-making.


“Overlaying this with human resources data allows organizations to identify whether a vulnerable device belongs to a senior executive or a critical operational team,” Elliott said.

This allows security teams to prioritize remediation efforts based on potential business impact, not just technical severity.
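One way to picture that prioritization — with hypothetical weights, not National Grid's formula — is to start from a technical severity score and add weight when the affected device belongs to an executive or a critical operational team:

```python
def remediation_priority(cvss_score, exploited, owner_is_executive, on_critical_team):
    """Blend technical severity with business context (illustrative weights)."""
    score = cvss_score            # 0-10 technical base, e.g. a CVSS score
    if exploited:
        score += 3.0              # active exploitation raises urgency
    if owner_is_executive:
        score += 2.0              # high-value target
    if on_critical_team:
        score += 2.0              # operational impact if compromised
    return score

# A medium-severity bug on an executive's exploited device outranks
# a higher-severity bug on a low-impact workstation.
print(remediation_priority(6.5, True, True, False))    # 11.5
print(remediation_priority(8.0, False, False, False))  # 8.0
```

The specific weights are invented; the point is that the ranking changes once HR and organizational data enter the calculation.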

“Where AI really improves is the business context part,” he said.

Monitoring regulatory changes

Another area where National Grid is experimenting with AI is regulatory compliance.

Energy companies operate under an extensive regulatory framework spanning multiple jurisdictions, and teams must monitor changes in legislation and ensure internal policies are followed.

Elliott said the company has developed an AI agent that tracks regulatory updates across multiple sources, including UK government policy changes and regulatory developments in the US states where the company operates.

The system scans updates from frameworks such as SIP, SOX, and PCI and compares them to the organization’s internal control structure. By analyzing 12 months of regulatory updates and predicting future developments, this tool helps identify areas where policy and management changes are needed.
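In its simplest form, the comparison step could be keyword overlap between update text and the control catalogue. This toy sketch uses invented field names; a real system would rely on an LLM or embeddings rather than exact word matches:

```python
def find_uncovered_updates(updates, controls):
    """Return IDs of regulatory updates that no internal control mentions."""
    gaps = []
    for upd in updates:
        terms = set(upd["summary"].lower().split())
        covered = any(terms & set(c["keywords"]) for c in controls)
        if not covered:
            gaps.append(upd["id"])   # candidate for a policy or control change
    return gaps

controls = [
    {"id": "CTRL-7", "keywords": {"retention", "logging"}},
    {"id": "CTRL-9", "keywords": {"encryption", "transit"}},
]
updates = [
    {"id": "UPD-1", "summary": "New logging retention requirements"},
    {"id": "UPD-2", "summary": "Incident reporting within 72 hours"},
]
print(find_uncovered_updates(updates, controls))  # ['UPD-2']
```

An update that maps to no existing control is exactly the kind of gap the tool described here would flag for the compliance team.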

“The agents are looking at 12 months of updates on all of these regulations,” Elliott said, while also analyzing the company’s control framework to determine “what needs to change.”

The analysis is also forward-looking, giving the team an early view of regulatory trends in the year ahead.

Balancing speed and reliability

Elliott said that despite the potential benefits, organizations need to ensure that employees understand the limitations of AI systems. One challenge is the risk that staff will begin to trust AI output without questioning it.

“People run the risk of acting as subject matter experts when they’re not,” he said.

To address this, National Grid has rolled out an AI training program across the organization, from executives to technical specialists. The aim is to ensure staff understand how AI systems work and why human judgment remains essential.

“It’s not a one-and-done thing,” Elliott said. “We need to continually strengthen that.”


