Abstract

The integration of renewable energy sources such as solar photovoltaics (PV) is critical to reducing carbon emissions but exerts pressure on power grid operations. Microgrids comprising buildings, distributed energy resources, and energy storage systems have been introduced to alleviate these issues, and optimal operation is needed to coordinate the different components on the grid. Model predictive control (MPC) and reinforcement learning (RL) have proven capable of solving such operation problems in proof-of-concept studies. However, their application in real-world buildings is limited by low reproducibility and high implementation costs, and a systematic, quantitative understanding of their strengths and weaknesses in actual applications is lacking. Hence, this study aims to improve the scalability of optimal control solutions for smart grid operations by comparing MPC and RL with respect to their requirements and control performance. We leveraged the CityLearn simulation framework to implement and compare alternative MPC- and RL-based control solutions for the energy management of microgrids. Beyond control performance in cost saving and carbon reduction, other factors such as robustness and transferability were also examined. While both methods achieved promising results, MPC performed slightly better and could be transferred more smoothly. Given a standardized framework, MPC is more suitable in most cases for microgrid operations. However, RL could be preferable for its speed in making decisions when a large number of energy systems are involved.
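To make the MPC side of the comparison concrete, the following is a minimal receding-horizon dispatch sketch for a single battery, in the spirit of (but not taken from) the study: at each step it searches discrete charge/discharge actions over a short price-forecast horizon and applies the first action of the cheapest feasible sequence. All quantities, names, and the brute-force solver are illustrative assumptions, not the paper's implementation.

```python
from itertools import product

def mpc_dispatch(prices, load, capacity=10.0, power=5.0, horizon=3):
    """Receding-horizon (MPC-style) battery dispatch sketch.

    prices, load: per-step electricity price and building load forecasts.
    capacity, power: battery energy capacity and per-step charge/discharge limit.
    Returns the applied action sequence and the resulting total energy cost.
    All parameters are hypothetical, for illustration only.
    """
    soc = 0.0                         # battery state of charge
    actions = (-power, 0.0, power)    # discharge / idle / charge per step
    schedule, total_cost = [], 0.0
    for t in range(len(prices)):
        h = min(horizon, len(prices) - t)
        best_cost, best_a = float("inf"), 0.0
        # Exhaustively evaluate every feasible action sequence over the horizon.
        for seq in product(actions, repeat=h):
            s, c, feasible = soc, 0.0, True
            for k, a in enumerate(seq):
                s_new = s + a
                if not 0.0 <= s_new <= capacity:
                    feasible = False
                    break
                # Grid import = building load + battery charging power.
                c += prices[t + k] * max(load[t + k] + a, 0.0)
                s = s_new
            if feasible and c < best_cost:
                best_cost, best_a = c, seq[0]
        # Apply only the first action, then re-optimize at the next step.
        soc += best_a
        total_cost += prices[t] * max(load[t] + best_a, 0.0)
        schedule.append(best_a)
    return schedule, total_cost
```

With a cheap hour followed by an expensive one, the controller charges early and discharges later, which is the cost-shifting behavior the abstract attributes to optimal microgrid operation. A trained RL policy, by contrast, would replace the per-step search with a single forward pass, which is the source of the decision-speed advantage noted above.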
