TAIGR: Testing AI's limits on the power grid

Artificial intelligence (AI) is being widely adopted across industries, including in key areas of grid management. Integrating AI into grid systems can increase efficiency and reliability, but it also poses many new challenges.

To address these challenges, Idaho National Laboratory (INL) has launched the Testing AI Grid Resilience (TAIGR) initiative, which aims to identify and mitigate risks in AI-enhanced grid management systems.

AI and advanced grid automation

For years, utilities have relied on statistical methods and algorithms to manage maintenance planning, load forecasting and grid monitoring. Today, AI is taking these efforts to a new level. By analyzing a wide range of historical data, including operator control records, AI can predict outages and suggest real-time actions for operators to take.
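To make this concrete, here is a minimal, purely illustrative load-forecasting sketch in Python. The features, synthetic data and choice of a gradient-boosted model are assumptions for demonstration only, not INL's or any utility's actual tooling.

    # Minimal load-forecasting sketch (hypothetical; synthetic data only).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Synthetic history: hour of day, temperature (C), prior-day load (MW).
    hours = rng.integers(0, 24, size=1000)
    temps = rng.normal(20, 8, size=1000)
    prior = rng.normal(900, 120, size=1000)

    # Synthetic "true" load with a daily cycle and temperature sensitivity.
    load = (800 + 150 * np.sin(hours / 24 * 2 * np.pi)
            + 6 * np.abs(temps - 18) + 0.1 * prior)

    X = np.column_stack([hours, temps, prior])
    model = GradientBoostingRegressor().fit(X, load)

    # Forecast load for hour 18, 30 C, 950 MW prior-day load.
    print(model.predict([[18, 30.0, 950.0]]))

In practice a utility would train on years of real telemetry rather than synthetic samples, but the pattern is the same: historical features in, a forecast out.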

However, these advances come with significant risks, including AI hallucinations that produce plausible but false recommendations, and adversarial cyberattacks that manipulate data or model architectures. These risks can lead to faulty decisions that destabilize the grid.

“AI can quickly analyze mountains of historical operator logs and trend predictions, suggesting the optimal mix of sources such as natural gas, nuclear and wind,” said Andy Bochman, INL senior grid strategist. “However, AI can also hallucinate and recommend actions that human experts know to avoid.”
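A common safeguard against such hallucinations, sketched below purely as an illustration, is to vet every AI recommendation against hard operator-defined limits before it reaches control systems. The asset names, limits and Recommendation type here are hypothetical.

    # Hypothetical guardrail: reject AI recommendations that violate
    # operator-defined hard limits before they reach control systems.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        asset: str
        setpoint_mw: float  # proposed dispatch level

    # Hard limits a human expert would never allow the AI to exceed.
    LIMITS_MW = {"gas_plant_a": (50.0, 400.0), "wind_farm_b": (0.0, 150.0)}

    def vet(rec: Recommendation) -> bool:
        """Return True only if the recommendation stays within known-safe bounds."""
        # Unknown assets get impossible bounds, so the check fails closed.
        lo, hi = LIMITS_MW.get(rec.asset, (float("inf"), float("-inf")))
        return lo <= rec.setpoint_mw <= hi

    # A hallucinated setpoint is flagged for human review, not executed.
    print(vet(Recommendation("gas_plant_a", 1200.0)))  # False -> escalate

A recommendation that fails the check would be escalated to a human operator rather than executed automatically.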

TAIGR's mission

To address these emerging issues, TAIGR brings together stakeholders from across the industry, including operators, owners, suppliers, regulators and researchers, to bridge gaps between these communities and create a collaborative environment for tackling the AI-related challenges inherent in grid operations.

Earlier this year, INL took TAIGR's first major step by hosting its first workshop, drawing stakeholders from across the energy sector. Representatives from utilities, vendors, federal regulators and academic institutions took part in two days of detailed discussions on the role of AI in grid management. The workshop highlighted both the urgent need for industry action and the knowledge gaps that remain.

Vendors and equipment manufacturers want to bring AI-powered tools to market, but many acknowledged that there is no systematic way to assess the associated risks. Alex Jenkins, senior AI solutions engineer at grid systems supplier AVEVA, suggested that future research-industry collaborations include supplier input to broaden the exchange of ideas and perspectives on AI safety.

Based on feedback from the workshop, INL researchers and industry partners launched the AMARANTH (Artificial Intelligence Management and Research for Advanced Networked Testbed Hub) project. AMARANTH is funded by the Department of Energy (DOE) Grid Deployment Office and focuses on addressing key industry concerns about AI safety and resilience.

INL research data scientist Patience Yockey emphasized the importance of thorough testing of AI systems to ensure reliability and predictability. “We need to understand the possible weaknesses not only in the data but also in AI design and implementation,” Yockey said. “We want to make AI systems reliable so they help deliver consistent power under all conditions.”
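As a hedged illustration of the kind of weakness testing Yockey describes (not TAIGR's actual methodology), one simple check perturbs a model's inputs slightly and verifies that its forecasts remain physically plausible and stable. The thresholds and the model interface below are assumptions.

    # Illustrative robustness check: small input perturbations should not
    # produce physically implausible or wildly unstable forecasts.
    import numpy as np

    def robustness_test(model, x, n_trials=100, noise=0.01,
                        plausible=(0.0, 2000.0), max_swing_mw=50.0):
        """Perturb input x and verify forecasts stay in-bounds and stable."""
        base = float(model.predict([x])[0])
        rng = np.random.default_rng(1)
        for _ in range(n_trials):
            x_noisy = np.asarray(x) * (1 + rng.normal(0, noise, len(x)))
            pred = float(model.predict([x_noisy])[0])
            if not (plausible[0] <= pred <= plausible[1]):
                return False  # implausible output: data or design weakness
            if abs(pred - base) > max_swing_mw:
                return False  # unstable under a tiny perturbation
        return True

    # Usage (with the hypothetical forecaster from the earlier sketch):
    # assert robustness_test(model, [18, 30.0, 950.0])

Real evaluation programs would go much further, probing training data quality, adversarial inputs and failure modes of the deployment itself.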

Building models and communities

TAIGR is modelled on DOE's successful CyTRICS (Cyber Testing for Resilient Industrial Control Systems) program, which enhances cybersecurity for critical infrastructure by rigorously testing and assessing industrial control systems and supply chains. Similarly, TAIGR operates as a voluntary, collaborative program with transparent methodologies vetted by experts from the utility industry, national laboratories and the federal government.

Discussions with major industry trade groups such as the Edison Electric Institute, the North American Electric Reliability Corporation and the Electric Power Research Institute are helping shape the program's parameters, Bochman said. There is strong interest in collaborating to address AI-related challenges.

“They all want to be involved and contribute,” he said.

Scott Aaronson, senior vice president of Energy Security & Industry Operations at Edison Electric Institute, highlighted the importance of INL's role.

“Edison Electric Institute and our member electric companies are investigating the full capabilities of advanced AI systems and working to engage with all stakeholders to develop, strengthen and ensure the safe and secure integration of those systems,” Aaronson said. “INL's work with TAIGR is an important part of that process and will help us better understand how to take advantage of these systems while reducing risk.”

Benefits of testing

The TAIGR project also benefits from INL's unique infrastructure, including a full-scale power grid testbed designed to safely test energy systems. “This is why national labs exist,” Bochman added. “Nowhere else can you find the expertise and infrastructure needed to solve these tough problems.”

INL has a long history of leadership in assessing and testing critical energy systems. Its dedicated test grid operates at up to 138 kilovolts and supports advanced power load testing, smart grid evaluation and energy storage experiments. This cutting-edge infrastructure now supports testing and demonstration of AI-enhanced grid technologies in a controlled environment.

Looking ahead, TAIGR's working groups are developing testing approaches and procedures and recruiting asset owners and vendors to participate. This collaboration aims to accelerate the safe and reliable adoption of AI solutions in the energy sector.

The road ahead

TAIGR is a timely initiative designed to help the energy sector balance the benefits of AI with its inherent risks. As AI technology is integrated into grid operations, it is critically important that the nation's power system remain robust and secure. TAIGR is not about resisting AI; it is about preparing for it responsibly.

Given the grid's importance to the economy, national security and public safety, it is essential to understand both the positive and negative effects AI-enhanced technologies can have.

“AI is too efficient to overlook and too useful to ignore,” Bochman said. “Even so, if AI is fed bad data or its results go unchecked, it can be unreliable, especially when dealing with critical infrastructure.”
