Abstract

The energy management (EM) of a microgrid (MG) is challenging because of the stochasticity of renewable sources and loads, and it can be formulated as a multi-stage decision-making problem. Reinforcement learning (RL) is well suited to solving multi-stage decision-making problems under uncertainty. With the recent growth of deferrable loads such as electric vehicles, there is flexibility to manage peak demand and thereby reduce the microgrid's operating cost. In this paper, we systematically explain for the first time how RL can be used to solve the scheduling problem of an isolated MG. A novel Q-learning algorithm employing a state-dependent action set is proposed to solve the EM problem of an isolated microgrid in the presence of critical and deferrable loads. Various scenarios of wind-turbine output power are generated to test the algorithm in a stochastic environment. In this work, the action set is divided into two: one containing all actions and a second with a reduced number of actions. The state space is likewise divided into two: one set in which the deferrable load must still be allotted and another in which it need not be. The advantage of this method is that it substantially reduces the computational burden even when the dimension of the state space is high. The judicious selection of the reinforcement function to meet all objectives and constraints is also illustrated with case studies. The proposed method yields optimized scheduling with significant cost reductions compared to existing work.
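To make the state-dependent action-set idea concrete, the following is a minimal Python sketch of tabular Q-learning in which the set of admissible actions depends on whether the deferrable load still has to be allotted. The action labels, the state encoding, and the hyperparameter values are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of Q-learning with a state-dependent action set for a
# discretized microgrid EM problem (hypothetical names and encoding).
import random
from collections import defaultdict

FULL_ACTIONS = [0, 1, 2, 3]   # e.g. generator set-points plus "serve deferrable load"
REDUCED_ACTIONS = [0, 1, 2]   # the same set without the deferrable-load action

def allowed_actions(state):
    """Return the action set for this state: the full set while the
    deferrable load still has to be allotted, the reduced set otherwise."""
    deferrable_pending = state[-1]   # assume the last state component flags the pending load
    return FULL_ACTIONS if deferrable_pending else REDUCED_ACTIONS

Q = defaultdict(float)               # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def choose_action(state):
    """Epsilon-greedy selection restricted to the admissible actions."""
    actions = allowed_actions(state)
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update, but the max in the target runs only
    over the actions that are valid in the successor state."""
    best_next = max(Q[(next_state, a)] for a in allowed_actions(next_state))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

Restricting the max and the exploration step to `allowed_actions(next_state)` is what shrinks the effective search space: states where the deferrable load is already served never evaluate the deferrable-load action at all.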
