Abstract
In an era characterized by the rapid proliferation of distributed flexible resources (DFRs), the development of customized energy management and regulation strategies has attracted significant research interest. The geographical dispersion and unpredictability of these resources, however, pose substantial barriers to their effective and computationally tractable regulation. To address these challenges, this paper proposes a deep reinforcement learning‐based energy management strategy for distributed resources that accounts for the physical and structural constraints of the distribution network. The strategy is modelled as a sequential decision‐making framework via a Markov decision process, informed by physical states and external information. Targeting a community energy management system that serves critical public infrastructure and maximizes holistic community benefits, the proposed approach adapts to fluctuating resource availability and market prices, enabling intelligent regulation of distributed flexible resources. Simulation and empirical analysis demonstrate that the proposed strategy improves the economic benefits and decision‐making efficiency of distributed flexible resource regulation while ensuring the security of the distribution network power flow.
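To make the Markov decision process framing concrete, the sketch below shows a minimal gym-style environment for community DFR dispatch. The state features (market price, renewable output, storage state of charge, hour), the single battery-setpoint action, the toy price and PV profiles, and the feeder-limit penalty standing in for distribution-network power-flow security are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class CommunityEnergyEnv:
    """Hypothetical MDP for community distributed-flexible-resource dispatch.

    State  : [market price, renewable output, battery state of charge, hour]
    Action : battery charge(-)/discharge(+) power setpoint in kW
    Reward : dispatch revenue minus a penalty for violating a simplified
             feeder export limit (a proxy for power-flow security).
    All names and numbers are assumptions for illustration only.
    """

    def __init__(self, capacity_kwh=500.0, p_max_kw=100.0, feeder_limit_kw=400.0):
        self.capacity = capacity_kwh
        self.p_max = p_max_kw
        self.feeder_limit = feeder_limit_kw
        self.reset()

    def reset(self):
        self.soc = 0.5   # state of charge, fraction of capacity
        self.hour = 0
        return self._obs()

    def _obs(self):
        # Deterministic toy profiles; a real model would use forecasts/measurements.
        price = 30.0 + 20.0 * np.sin(2 * np.pi * self.hour / 24)          # $/MWh
        solar = max(0.0, 200.0 * np.sin(np.pi * (self.hour - 6) / 12))    # kW
        return np.array([price, solar, self.soc, self.hour], dtype=np.float32)

    def step(self, action_kw):
        price, solar, _, _ = self._obs()
        p = float(np.clip(action_kw, -self.p_max, self.p_max))
        # One-hour step: positive p discharges the battery toward the grid.
        self.soc = float(np.clip(self.soc - p / self.capacity, 0.0, 1.0))
        net_injection = solar + p                       # kW exported at the feeder
        revenue = price * net_injection / 1000.0        # $ for this hour
        violation = max(0.0, net_injection - self.feeder_limit)
        reward = revenue - 10.0 * violation             # penalize insecure operation
        self.hour = (self.hour + 1) % 24
        done = self.hour == 0                           # one-day episode
        return self._obs(), reward, done, {}
```

A DRL agent (e.g., DQN or PPO from a standard library) would then be trained against `step()` in the usual observe-act-reward loop, learning a dispatch policy that trades off market revenue against the network-security penalty.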