Abstract

The microgrid (MG) is an effective way to integrate renewable energy into the power system on the consumer side. Within the MG, an energy management system (EMS) must be deployed to ensure efficient utilization and stable operation. To help the EMS make optimal scheduling decisions, we propose a real-time dynamic optimal energy management (OEM) method based on a deep reinforcement learning (DRL) algorithm. Traditionally, the OEM problem is solved by mathematical programming (MP) or heuristic algorithms, which can suffer from low computational accuracy or efficiency. In the proposed DRL approach, the MG-OEM problem is formulated as a Markov decision process (MDP) that accounts for environmental uncertainties and is then solved by the proximal policy optimization (PPO) algorithm. PPO is a policy-based DRL algorithm that handles continuous state and action spaces, and the proposed method comprises two phases: offline training and online operation. During training, PPO learns from historical data to capture the uncertainty characteristics of renewable energy generation and load consumption. Finally, a case study demonstrates the effectiveness and computational efficiency of the proposed method.
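To make the two-phase scheme concrete, the sketch below shows how an MDP formulation of a microgrid scheduling problem could be trained offline with PPO and then queried online. It is a minimal illustration, not the authors' implementation: the `MicrogridEnv` class, its battery/price parameters, and the sinusoidal generation and load profiles are hypothetical assumptions, and stable-baselines3 is used here only as one readily available PPO implementation.

```python
# Minimal sketch (illustrative only): a hypothetical single-battery microgrid
# environment and a PPO agent trained on it with stable-baselines3.
# The state, action, dynamics, and cost model are assumptions, not the paper's model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class MicrogridEnv(gym.Env):
    """Toy MG: one battery plus grid imports, one decision per hour over a day."""

    def __init__(self):
        super().__init__()
        # Observation: [PV output (kW), load (kW), battery SoC (kWh), hour of day]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([50.0, 50.0, 100.0, 23.0], dtype=np.float32),
        )
        # Action: battery power in kW (negative = discharge, positive = charge)
        self.action_space = spaces.Box(low=-10.0, high=10.0, shape=(1,), dtype=np.float32)
        self.capacity = 100.0  # assumed battery capacity (kWh)
        self.price = 0.15      # assumed flat grid price ($/kWh)

    def _sample_exogenous(self, hour):
        # Crude stand-ins for uncertain PV generation and load; the paper instead
        # captures these uncertainties from historical data during training.
        pv = max(0.0, 30.0 * np.sin(np.pi * (hour - 6) / 12) + self.rng.normal(0, 2))
        load = max(5.0, 20.0 + 10.0 * np.sin(np.pi * hour / 12) + self.rng.normal(0, 2))
        return pv, load

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.rng = np.random.default_rng(seed)
        self.hour = 0
        self.soc = 50.0
        self.pv, self.load = self._sample_exogenous(self.hour)
        return np.array([self.pv, self.load, self.soc, self.hour], dtype=np.float32), {}

    def step(self, action):
        charge = float(np.clip(action[0], -10.0, 10.0))
        # Keep the SoC within [0, capacity]
        charge = float(np.clip(charge, -self.soc, self.capacity - self.soc))
        self.soc += charge
        grid_import = max(self.load + charge - self.pv, 0.0)  # power bought from the grid
        reward = -self.price * grid_import                    # reward = negative operating cost
        self.hour += 1
        terminated = self.hour >= 24
        self.pv, self.load = self._sample_exogenous(self.hour % 24)
        obs = np.array([self.pv, self.load, self.soc, float(self.hour % 24)], dtype=np.float32)
        return obs, reward, terminated, False, {}


# Offline training phase: learn a scheduling policy from many simulated days.
env = MicrogridEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=50_000)

# Online operation phase: one forward pass of the trained policy per decision step.
obs, _ = env.reset(seed=0)
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, truncated, _ = env.step(action)
```

In this toy setup the costly offline part is the `learn` call, while online operation reduces to a single policy evaluation per time step, which is what gives DRL-based OEM its real-time computational advantage over re-solving an MP or heuristic optimization at every interval.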
