Abstract
The increasing use of distributed and renewable energy resources presents a challenge for traditional control methods due to the higher complexity and uncertainty brought by these new technologies. To address these challenges, reinforcement learning (RL) algorithms are used to design and implement an energy management system (EMS) for different microgrid configurations. The RL approach seeks to train an agent through its interaction with the environment rather than from direct data, as in supervised learning. With this in mind, the energy management problem is posed as a Markov decision process and solved using different state-of-the-art Deep Reinforcement Learning (DRL) algorithms, such as Deep Q-Networks (DQN), Proximal Policy Optimization (PPO), and Twin Delayed Deep Deterministic Policy Gradient (TD3). These results are compared with traditional EMS implementations, namely rule-based control and Model Predictive Control (MPC), which serve as benchmarks. Simulations are run with the novel Pymgrid module, built for this purpose. Preliminary results show that the RL agents perform comparably to the classical implementations, with possible benefits for both generic and specific use cases.
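To illustrate what "posing energy management as a Markov decision process" can look like in practice, the sketch below frames a toy microgrid as a Gym-style environment with a state (load, PV generation, battery state of charge, tariff), a discrete action set, and a reward equal to the negative operating cost. This is a minimal illustrative assumption, not the paper's implementation and not Pymgrid's actual API; the class name MicrogridEnv, the toy profiles, and the action set are hypothetical. A DQN, PPO, or TD3 agent would replace the random action choice in the rollout loop.

```python
import numpy as np

class MicrogridEnv:
    """Hypothetical MDP wrapper: state = (load, pv, soc, price), discrete actions."""
    ACTIONS = ["charge_battery", "discharge_battery", "idle"]

    def reset(self):
        self.t = 0
        self.soc = 0.5  # battery state of charge in [0, 1]
        return self._observe()

    def _observe(self):
        load = 1.0 + 0.5 * np.sin(2 * np.pi * self.t / 24)   # toy load profile
        pv = max(0.0, np.sin(np.pi * (self.t % 24) / 24))     # toy PV profile
        price = 0.2 if 8 <= self.t % 24 <= 20 else 0.1        # toy day/night tariff
        return np.array([load, pv, self.soc, price], dtype=np.float32)

    def step(self, action):
        load, pv, soc, price = self._observe()
        battery_power = 0.0
        if self.ACTIONS[action] == "charge_battery" and soc < 1.0:
            battery_power = -0.1                 # battery absorbs energy
            self.soc = min(1.0, soc + 0.1)
        elif self.ACTIONS[action] == "discharge_battery" and soc > 0.0:
            battery_power = 0.1                  # battery supplies energy
            self.soc = max(0.0, soc - 0.1)
        net_import = max(0.0, load - pv - battery_power)  # energy bought from the grid
        reward = -price * net_import                      # reward = negative operating cost
        self.t += 1
        done = self.t >= 24                               # one-day episode
        return self._observe(), reward, done, {}

# Random-policy rollout; a trained DRL agent would supply the action instead.
env = MicrogridEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = np.random.randint(len(MicrogridEnv.ACTIONS))
    obs, reward, done, _ = env.step(action)
    total_reward += reward
print(f"Episode operating cost: {-total_reward:.2f}")
```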