Abstract

This paper proposes a deep reinforcement learning-based approach to optimally manage the different energy resources within a microgrid. The proposed methodology accounts for the stochastic behavior of the main elements, including the load profile, generation profile, and pricing signals. The energy management problem is formulated as a finite-horizon Markov decision process (MDP) by defining the state, action, reward, and objective functions, without prior knowledge of the transition probabilities. This formulation requires no explicit model of the microgrid; instead, it uses accumulated data and interaction with the microgrid to derive the optimal policy. An efficient reinforcement learning algorithm based on deep Q-networks is implemented to solve the resulting formulation. To confirm the effectiveness of the methodology, a case study based on a real microgrid is presented. The results demonstrate the methodology's capability to schedule the various energy resources within a microgrid online, with cost-effective actions under stochastic conditions; the achieved operating costs are within 2% of those of the optimal schedule.
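To make the described pipeline concrete, the sketch below shows a minimal deep Q-network training loop of the kind the abstract refers to. It is an illustration only, not the paper's implementation: the state dimension, action set, network architecture, and hyperparameters (`STATE_DIM`, `N_ACTIONS`, `GAMMA`, etc.) are all assumptions, and the paper does not specify them here.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Hypothetical dimensions: the state might stack load, renewable generation,
# price signal, and storage level; actions might be discretized charge /
# discharge / idle set-points. None of these choices come from the paper.
STATE_DIM, N_ACTIONS = 4, 3
GAMMA, BATCH, EPS = 0.99, 64, 0.1


class QNet(nn.Module):
    """Small feed-forward network approximating Q(s, a)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)


q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer of (s, a, r, s', done)


def act(state):
    """Epsilon-greedy action selection over the discrete action set."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())


def train_step():
    """One gradient step on the temporal-difference loss over a replayed batch."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s, a, r, s2, done = map(
        lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch)
    )
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Finite-horizon episodes: terminal steps bootstrap with zero future value.
        target = r + GAMMA * target_net(s2).max(1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a microgrid setting, the reward at each step would typically be the negative of the operating cost incurred (energy purchases minus sales at the current price), so that maximizing return minimizes cost over the scheduling horizon.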
