Abstract

Microgrids are effective solutions for integrating renewable energy resources and delivering seamless green electricity to minimize the carbon footprint. In recent years, extreme weather events have occurred frequently worldwide, causing significant economic and societal losses. Such events introduce uncertainties into microgrid energy scheduling problems and increase the challenges of microgrid operation. Traditional optimization approaches suffer from inaccurate models of the uncertain microgrid and from unseen events. Existing reinforcement learning (RL)-based approaches are likewise hampered by limited generalization and by the growing computational burden when stochastic formulations are required to accommodate the uncertainties. This paper proposes a new parallelized reinforcement learning (PRL) method based on probabilistic events to handle microgrid energy uncertainties. Specifically, several local learning agents interact with their respective microgrid environments in a distributed manner and report outcomes to a global agent, which optimizes microgrid energy resources online during extreme events. The stochastic microgrid energy optimization problem is reformulated to include all possible scenarios with their probabilities. The advantage estimate functions of the learning agents are designed with a backward sweep that transfers the outcomes to the value-function update process. Two simulation studies, stochastic optimization and online testing, are performed to compare the proposed method with several existing RL approaches. Results substantiate that the proposed PRL method achieves up to 20% better optimization performance with 4 and 28 times less computational cost than Q-learning with experience replay and multi-agent Q-learning, respectively.
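The two ingredients named in the abstract, a backward-sweep advantage estimate and a global agent that aggregates local-agent outcomes weighted by scenario probabilities, can be sketched as follows. This is a minimal illustration under assumed conventions (GAE-style discounting, gradient dictionaries), not the paper's actual implementation; all function names and parameters are hypothetical.

```python
def backward_sweep_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Compute advantage estimates with a backward sweep over one
    local agent's trajectory (a GAE-style recursion is assumed here)."""
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        # Bootstrap with the next state's value; zero past the horizon.
        next_value = values[t + 1] if t + 1 < len(values) else 0.0
        delta = rewards[t] + gamma * next_value - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

def global_update(scenario_results):
    """Aggregate local-agent gradients into the global agent's update,
    weighting each scenario's contribution by its probability.

    scenario_results: list of (probability, gradient_dict) pairs,
    one per local agent / scenario."""
    total = {}
    for prob, grads in scenario_results:
        for key, grad in grads.items():
            total[key] = total.get(key, 0.0) + prob * grad
    return total
```

The probability weighting mirrors the reformulated stochastic objective: each scenario's outcome contributes in proportion to its likelihood, so the global agent optimizes the expected performance across all modeled extreme-event scenarios.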

