Security of Artificial Intelligence

NREL researchers work collaboratively across cybersecurity and artificial intelligence (AI), focusing on energy security to strengthen the grid against evolving threats.

Ensuring the security of artificial intelligence builds on NREL's expertise in explainable and trustworthy AI, formal verification, red teaming of defenses against cyberattacks, generative AI, and extreme events. NREL researchers are leading this emerging area of research to build resilient and secure energy systems and to solve evolving cybersecurity challenges.

Photo: Three NREL cybersecurity researchers stand in front of a wall-sized projection of the Advanced Research on Integrated Energy Systems (ARIES) platform.

Advanced Emulation for Cyber Evaluation

NREL’s Advanced Research on Integrated Energy Systems (ARIES) platform provides unique simulation and emulation capabilities, including interfaces with real devices and infrastructure, to generate cyber-physical data at scale. The ARIES Cyber Range also provides emulation and virtualization capabilities to execute novel red team testing of AI models in a safe, isolated environment.
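To make the data-generation idea concrete, here is a minimal sketch of how labeled cyber-physical telemetry might be synthesized for training or evaluating an AI-based attack detector. It is illustrative only, not ARIES code: the per-unit voltage model, the false-data-injection ramp, and every parameter are hypothetical assumptions.

import numpy as np

rng = np.random.default_rng(0)

def simulate_feeder_voltage(n_steps, attack_start=None):
    """Return a voltage trace (per unit) and per-step attack labels.

    A benign trace is nominal voltage plus load noise; an attacked trace
    adds a slow false-data-injection bias starting at attack_start.
    """
    v = 1.0 + 0.01 * rng.standard_normal(n_steps)  # nominal 1.0 p.u. + noise
    labels = np.zeros(n_steps, dtype=int)
    if attack_start is not None:
        v[attack_start:] += 0.002 * np.arange(n_steps - attack_start)  # bias ramp
        labels[attack_start:] = 1
    return v, labels

# A small labeled dataset: half benign traces, half attacked traces.
dataset = [simulate_feeder_voltage(500) for _ in range(50)]
dataset += [simulate_feeder_voltage(500, attack_start=int(rng.integers(100, 400)))
            for _ in range(50)]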

Detecting and Mitigating Cyberattacks

In a recent development in the security of artificial intelligence, Lawrence Berkeley National Laboratory is leading the Mitigation via Analytics for Grid-Inverter Cybersecurity (MAGIC) project in partnership with NREL, Siemens Corporate Technologies, Cornell Tech, and the National Rural Electric Cooperative Association Research. Sponsored by the U.S. Department of Energy’s Office of Cybersecurity, Energy Security, and Emergency Response, the project is developing secure AI/machine learning tools to both detect and mitigate cyberattacks on aggregations of distributed energy resources in electric power distribution systems and microgrids. MAGIC is funded by the Fiscal Year 2023 Bipartisan Infrastructure Law (Section 40125(b) research call).

This work includes developing attack detection and mitigation algorithms, a hardware-in-the-loop testbed and demonstration test plan, and a red team software test environment for AI models. Hardware-in-the-loop testing, algorithm refinement, and demonstration of the algorithms will follow in phase two.
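As a rough illustration of the detection side, the sketch below implements one generic approach: flag telemetry that drifts too far from a profile learned on benign data. This is not the MAGIC project’s algorithm; the per-timestep profile, z-score threshold, and hit count are assumptions chosen for clarity, and a fielded detector would be validated against the hardware-in-the-loop testbed.

import numpy as np

def fit_benign_profile(benign_traces):
    """Estimate a per-timestep mean and standard deviation from benign traces."""
    stacked = np.vstack(benign_traces)
    return stacked.mean(axis=0), stacked.std(axis=0) + 1e-6  # avoid divide-by-zero

def detect(trace, mean, std, z_threshold=4.0, min_hits=10):
    """Declare an attack when enough samples exceed a z-score threshold.

    z_threshold and min_hits are illustrative tuning knobs that trade off
    detection sensitivity against false alarms.
    """
    z = np.abs((trace - mean) / std)
    return int((z > z_threshold).sum()) >= min_hits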

As part of this project, NREL and Cornell Tech are developing a structured red teaming framework to assess flaws and vulnerabilities in AI models and improve the security of those systems against cybersecurity threats. The red teaming process, conducted in a controlled environment and in collaboration with AI developers, will apply adversarial methods to stress test each AI system and yield more robust attack detection and mitigation algorithms.
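The sketch below illustrates the stress-testing idea in miniature: a bounded random search for input perturbations that let an attacked trace slip past a detector such as the one above. It is a toy stand-in for a structured red teaming framework; the perturbation budget, iteration count, and random-search strategy are illustrative assumptions, not NREL’s methods.

import numpy as np

def evasion_search(trace, detector, budget=0.02, iters=200, seed=1):
    """Search for a bounded perturbation that makes detector miss an
    attacked trace, exposing a potential blind spot in the model."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        delta = rng.uniform(-budget, budget, size=trace.shape)
        if not detector(trace + delta):
            return trace + delta  # evasion found: the detector is brittle here
    return None  # no evasion found within this perturbation budget

A returned trace is concrete evidence of a vulnerability the red team can hand back to the AI developers, for example by calling evasion_search(attacked_trace, lambda t: detect(t, mean, std)).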

Contact

Ryan King

Senior Scientist, Applied Mathematics, Computational Science

Ryan.King@nrel.gov
303-275-4182

Richard Macwan

Manager, Cyber-Resilience Science Group

Richard.Macwan@nrel.gov
303-275-4579