Abstract

In this study, a novel residential virtual power plant (RVPP) scheduling method that leverages a gated recurrent unit (GRU)-integrated deep reinforcement learning (DRL) algorithm is proposed. In the proposed scheme, the GRU-integrated DRL algorithm guides the RVPP to participate effectively in both the day-ahead and real-time markets, lowering electricity purchase costs and consumption risks for end-users. The Lagrangian relaxation technique is introduced to transform the constrained Markov decision process (CMDP) into an unconstrained optimization problem, which guarantees that the constraints are strictly satisfied without the need to manually tune penalty coefficients. Furthermore, to enhance the scalability of the constrained soft actor-critic (CSAC)-based RVPP scheduling approach, a fully distributed scheduling architecture is designed to enable plug-and-play integration of residential distributed energy resources (RDERs). Case studies on the constructed RVPP scenario validate the performance of the proposed methodology in enhancing the responsiveness of RDERs to electricity tariffs, balancing the supply and demand of the power grid, and ensuring customer comfort.
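As a rough illustration of the Lagrangian relaxation idea mentioned above, the sketch below shows how a CMDP constraint is commonly folded into an unconstrained soft actor-critic-style objective via a learnable Lagrange multiplier updated by dual ascent. This is a minimal, hypothetical example under assumed names (constraint_budget, log_lambda, lagrangian_actor_loss), not the paper's implementation.

```python
import torch

# Illustrative Lagrangian relaxation of a CMDP constraint (not the paper's code).
# The expected constraint cost must stay below constraint_budget; the multiplier
# lambda is learned instead of hand-tuning a fixed penalty coefficient.
constraint_budget = 0.1                              # assumed constraint threshold
log_lambda = torch.zeros(1, requires_grad=True)      # log of the Lagrange multiplier
lambda_opt = torch.optim.Adam([log_lambda], lr=3e-4)

def lagrangian_actor_loss(q_reward, q_cost, log_prob, alpha):
    """Unconstrained surrogate: entropy-regularized reward minus lambda-weighted cost."""
    lam = log_lambda.exp().detach()                  # treat lambda as fixed for the actor step
    return (alpha * log_prob - q_reward + lam * q_cost).mean()

def update_lambda(avg_constraint_cost):
    """Dual ascent: grow lambda when the constraint is violated, shrink it otherwise."""
    lam = log_lambda.exp()
    loss = -lam * (avg_constraint_cost - constraint_budget)
    lambda_opt.zero_grad()
    loss.backward()
    lambda_opt.step()
```

In this kind of scheme, the actor minimizes the relaxed objective while the multiplier is adjusted in an outer loop, so constraint satisfaction is driven by the dual update rather than by a manually chosen penalty weight.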
