Reinforcement Learning Research
NREL's reinforcement learning (RL) research focuses on large, complex energy systems that challenge traditional methodology, including fleet management and controls for wind farms, grid-interactive buildings, autonomous vehicles, and smart homes.
To apply RL to these increasingly large and complex systems, NREL researchers are leveraging the laboratory's state-of-the-art computing resources.
High-Performance Computing for Reinforcement Learning
As RL is applied to larger, more complex, and more realistic systems, implementing RL algorithms becomes correspondingly demanding. At this scale, RL requires specialized graphics processing units (GPUs) and far more compute cores than a standard laptop provides.
NREL researchers have developed deep expertise in leveraging NREL's high-performance computing (HPC) facilities to implement RL algorithms. The research team has built functionality to run RL algorithms in parallel across multiple nodes of NREL's Eagle computing system using the open-source RL library RLlib.
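Libraries such as RLlib distribute this work across nodes via Ray; the core pattern is a driver that farms episode rollouts out to parallel workers and aggregates the results. The toy sketch below illustrates that pattern using only the Python standard library; the stand-in environment and policy are invented for illustration, not NREL's actual setup.

```python
# Toy illustration of the parallel-rollout pattern that RL libraries
# implement at scale across HPC nodes: many workers collect episodes
# concurrently, and the driver aggregates their returns.
import random
from concurrent.futures import ThreadPoolExecutor

def rollout(seed, horizon=50):
    """Run one episode in a stand-in environment; return the episode return."""
    rng = random.Random(seed)
    state, total_reward = 0.0, 0.0
    for _ in range(horizon):
        action = rng.choice([-1.0, 1.0])    # placeholder policy
        state += 0.1 * action
        total_reward += -abs(state)         # reward: stay near zero
    return total_reward

def parallel_rollouts(num_workers=4, episodes_per_worker=8):
    """Collect episodes concurrently, as an RL driver would across nodes."""
    seeds = range(num_workers * episodes_per_worker)
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        returns = list(pool.map(rollout, seeds))
    return sum(returns) / len(returns)      # mean episode return
```

In a real HPC deployment, the thread pool is replaced by worker processes spread across many nodes, but the driver/worker structure is the same.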
A Framework for Multi-Agent Reinforcement Learning in Power Systems
NREL computational science researchers are developing software to facilitate the application of RL in various settings. One notable example is PowerGridworld, an open-source software package that provides users with a modular, customizable framework for multi-agent RL focused on power systems. Many frameworks exist for training multi-agent RL algorithms, but few support rapid prototyping and development of the environments themselves, making PowerGridworld a valuable tool for power-systems-focused RL and a promising avenue for future collaborations.
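The modular idea can be sketched as follows: each grid component (a building, a battery) is an agent with its own state and action, and the environment composes them and scores a shared system-level objective. The class and method names below are illustrative only, not PowerGridworld's actual API.

```python
# Minimal sketch of a modular multi-agent power-system environment:
# per-component agents are composed into one joint environment with a
# shared, system-level reward signal.
class BatteryAgent:
    def __init__(self):
        self.soc = 0.5                      # state of charge in [0, 1]

    def step(self, action):                 # action: charge rate in [-1, 1]
        self.soc = min(1.0, max(0.0, self.soc + 0.1 * action))
        return self.soc                     # observation for this agent

class MultiAgentGridEnv:
    """Composes per-component agents into one joint environment."""
    def __init__(self, agents):
        self.agents = agents

    def step(self, actions):
        obs, rewards, net_power = {}, {}, 0.0
        for name, act in actions.items():
            obs[name] = self.agents[name].step(act)
            net_power += act
        # Shared objective: keep the aggregate power exchange near zero.
        for name in actions:
            rewards[name] = -abs(net_power)
        return obs, rewards

env = MultiAgentGridEnv({"batt1": BatteryAgent(), "batt2": BatteryAgent()})
obs, rew = env.step({"batt1": 0.5, "batt2": -0.5})
```

Swapping in a new component type only requires writing its agent class, which is the kind of rapid environment prototyping the framework targets.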
Designing Hybrid Reinforcement Learning Approaches
NREL researchers are exploring ways to solve energy systems control problems by combining RL with traditional approaches. Although RL has several advantages—including bypassing the need for a mathematical model of the underlying system—it has disadvantages and challenges as well. By combining traditional approaches with novel RL methods, NREL researchers are bringing together the advantages of each and developing new capabilities for solving energy systems control problems.
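One common hybrid pattern pairs a learned policy with a model-based layer that enforces known constraints: the RL policy proposes an action, and a simple filter derived from the system model projects it onto the feasible set. The sketch below is a generic illustration of this pattern with invented placeholder dynamics and bounds, not a specific NREL controller.

```python
# Hybrid control sketch: a learned policy proposes an action; a
# model-based safety layer clips it to actuator limits and enforces a
# simple state constraint predicted by a known linear model.
def learned_policy(state):
    """Stand-in for a trained RL policy (may propose infeasible actions)."""
    return 2.0 * state

def safety_filter(state, action, a_min=-1.0, a_max=1.0):
    """Clip to actuator limits, then enforce the constraint x + a <= 1.5."""
    action = min(a_max, max(a_min, action))
    if state + action > 1.5:
        action = 1.5 - state
    return action

def hybrid_controller(state):
    """RL proposal followed by model-based projection onto the feasible set."""
    return safety_filter(state, learned_policy(state))
```

The RL component supplies performance without needing an accurate global model, while the traditional layer guarantees the constraints that model-free learning alone cannot.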
Featured Projects
Learn about some RL projects NREL researchers are undertaking.
Energy-Efficient Buildings
NREL is using RL to increase energy efficiency in a variety of applications, including buildings. Many recent advances in building operation rely on algorithms that are highly sensitive to inaccuracies in the underlying building models, making this approach difficult to replicate across building types and hindering scalability. NREL researchers are using RL to tackle these shortcomings. Because RL does not require knowledge of the underlying building model, it does not suffer from the same replicability and stability issues as traditional, model-driven techniques. The anticipated impacts of this work are far-reaching. They include improving building efficiency, understanding building occupancy patterns, and increasing grid stability and resiliency.
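The model-free property described above can be seen in a toy tabular Q-learning loop for thermostat control: the agent never sees the building's thermal model, only observed temperatures and rewards. The dynamics below are an invented stand-in, not a real building model.

```python
# Toy tabular Q-learning for thermostat control. The agent learns purely
# from (temperature, reward) feedback; the env_step dynamics are hidden
# from it, illustrating model-free control.
import random

rng = random.Random(0)
TEMPS = list(range(15, 26))                 # discretized indoor temp, deg C
ACTIONS = [-1, 0, 1]                        # cool, hold, heat
SETPOINT = 21

Q = {(t, a): 0.0 for t in TEMPS for a in ACTIONS}

def env_step(temp, action):
    """Hidden dynamics the agent never models directly."""
    temp = min(25, max(15, temp + action))
    return temp, -abs(temp - SETPOINT)      # comfort penalty as reward

def train(episodes=500, alpha=0.2, gamma=0.9, eps=0.1):
    for _ in range(episodes):
        temp = rng.choice(TEMPS)
        for _ in range(20):
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(temp, x)]))
            nxt, r = env_step(temp, a)
            best_next = max(Q[(nxt, x)] for x in ACTIONS)
            Q[(temp, a)] += alpha * (r + gamma * best_next - Q[(temp, a)])
            temp = nxt

train()
policy = {t: max(ACTIONS, key=lambda a: Q[(t, a)]) for t in TEMPS}
```

After training, the learned policy heats when the room is cold and cools when it is hot, without ever being given the building's thermal equations.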
Next-Generation Compression-Ignition Engines
Another RL-related research project focuses on increasing the efficiency of compression-ignition engines. Conventional compression-ignition engines have high thermal efficiencies, but the extensive exhaust gas treatments they require reduce overall efficiency. One promising strategy for decreasing emissions involves splitting the fuel injection into a series of smaller injections. NREL researchers are using RL to optimize this process and maximize the engine's performance while decreasing emissions.
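The optimization can be framed as learning from trial feedback: each candidate split-injection schedule is an arm of a bandit, and the agent learns which schedule best trades off efficiency against emissions. The scoring function below is an invented stand-in for engine-simulation feedback, not a real combustion model.

```python
# Toy bandit over split-injection schedules: an epsilon-greedy agent
# learns from noisy feedback which schedule scores best on a combined
# efficiency/emissions objective.
import random

rng = random.Random(1)
SCHEDULES = [(1.0,), (0.5, 0.5), (0.4, 0.4, 0.2), (0.25, 0.25, 0.25, 0.25)]

def engine_feedback(schedule):
    """Hypothetical noisy reward: more pulses cut emissions, with a small
    efficiency cost per pulse."""
    emissions = 1.0 / len(schedule)
    efficiency_cost = 0.05 * len(schedule)
    return -emissions - efficiency_cost + rng.gauss(0, 0.01)

def epsilon_greedy(trials=2000, eps=0.1):
    value = {s: 0.0 for s in SCHEDULES}
    count = {s: 0 for s in SCHEDULES}
    for _ in range(trials):
        s = (rng.choice(SCHEDULES) if rng.random() < eps
             else max(SCHEDULES, key=lambda k: value[k]))
        r = engine_feedback(s)
        count[s] += 1
        value[s] += (r - value[s]) / count[s]   # running-mean update
    return max(SCHEDULES, key=lambda k: value[k])

best = epsilon_greedy()
```

The real problem is far richer (continuous timings, in-cycle dynamics), but the learn-from-feedback loop is the same shape.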
Battery Materials Discovery
NREL researchers are developing an RL-based approach to design new battery materials. Currently, candidate material compositions must be identified manually—an expensive, time-consuming process. RL techniques are used to automate the identification of potential new material compositions, making it faster and easier to identify new battery materials and facilitating rapid improvements in battery performance.
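A simple illustration of learning-driven composition search is a cross-entropy-style loop in the policy-search family: sample candidate fractions, score them with a surrogate property model, and refit the sampler to the elite candidates. The surrogate below is an invented stand-in for real battery-property prediction.

```python
# Cross-entropy-style search over a composition fraction: sample
# candidates, keep the elite, refit the sampling distribution, repeat.
import random

rng = random.Random(2)

def surrogate_score(x):
    """Hypothetical property model: peak performance at fraction 0.3."""
    return -(x - 0.3) ** 2

def cem_search(iters=30, pop=40, elite=8):
    mu, sigma = 0.5, 0.3                    # sampling distribution params
    for _ in range(iters):
        cands = [min(1.0, max(0.0, rng.gauss(mu, sigma)))
                 for _ in range(pop)]
        cands.sort(key=surrogate_score, reverse=True)
        top = cands[:elite]                 # elite candidates
        mu = sum(top) / elite
        sigma = max(0.02, (sum((c - mu) ** 2 for c in top) / elite) ** 0.5)
    return mu

best = cem_search()
```

Automating this propose-score-refine loop is what replaces the manual identification of candidate compositions the text describes.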
Publications
Two-Stage Reinforcement Learning Policy Search for Grid-Interactive Building Control, IEEE Transactions on Smart Grid (2022)
An Edge-Cloud Integrated Solution for Buildings Demand Response Using Reinforcement Learning, IEEE Transactions on Smart Grid (2020)
Transferable Reinforcement Learning for Smart Homes, 1st International Workshop on Reinforcement Learning for Energy Management in Buildings & Cities (2020)
Curriculum-Based Reinforcement Learning for Distribution System Critical Load Restoration, IEEE Transactions on Power Systems (2022)
A Hybrid Reinforcement Learning-MPC Approach for Distribution System Critical Load Restoration, IEEE Power & Energy Society General Meeting (2022)
PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems, Thirteenth ACM International Conference on Future Energy Systems (2022)
Deep Reinforcement Learning Based Volt-VAR Optimization in Smart Distribution Systems, IEEE Transactions on Smart Grid (2021)
Distributed Reinforcement Learning with ADMM-RL, American Control Conference (2019)
Deep Reinforcement Learning for Dynamic Control of Fuel Injection Timing in Multi-Pulse Compression Ignition Engines, International Journal of Engine Research (2021)
A Modular and Transferable Reinforcement Learning Framework for the Fleet Rebalancing Problem, IEEE Transactions on Intelligent Transportation Systems (2021)
Deep Reinforcement Learning for Automatic Generation Control of Wind Farms, American Control Conference (2021)
Proof-of-Concept of a Reinforcement Learning Framework for Wind Farm Energy Capture Maximization in Time-Varying Wind, Journal of Renewable and Sustainable Energy (2021)
A Distributed Reinforcement Learning Yaw Control Approach for Wind Farm Energy Capture Maximization, American Control Conference (2020)
Open-Source Software
RLC4CLR: Reinforcement Learning Controller for Critical Load Restoration Problems