Abstract

Demand response (DR) is an approach that encourages consumers to reshape their consumption patterns during peak demand periods, improving power-system reliability and minimizing cost. An optimal DR scheme benefits not only the distribution system operators (DSOs) but also the consumers in the energy network. This paper introduces a multi-agent coordination control and reinforcement learning approach for optimal DR management. Each microgrid in the smart grid is modeled as an agent that estimates states and actions and follows programmed reward and incentive plans. A multi-agent Markov game (MAMG) formulation defines the states and actions, while the reward is learned through deep Q-network (DQN) and deep deterministic policy gradient (DDPG) reinforcement learning schemes. The proposed DR model also encourages consumer participation for long-term incentivized benefits by integrating battery energy storage systems (BESS) into the smart grid network. The reliability of the DQN and DDPG schemes is demonstrated, and the dynamically changing electricity cost is reduced by 19.86%. Moreover, complex microgrids are controlled with limited control information, ensuring the integrity and reliability of the network. The proposed schemes were simulated and evaluated in MATLAB and Python (PyCharm IDE) environments.
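To illustrate the kind of reward-driven behavior the abstract describes, the sketch below uses tabular Q-learning as a simplified stand-in for the paper's DQN agent: a single microgrid agent with a battery (BESS) learns to charge when electricity prices are low and discharge when they are high. The price levels, reward shaping, and hyperparameters are hypothetical assumptions for illustration, not values from the paper.

```python
import random

# Hypothetical price levels ($/kWh) and battery actions; the agent's
# reward is the cost avoided (or revenue earned) relative to the
# average price -- a simplified proxy for the paper's incentive plan.
PRICES = {"low": 0.10, "high": 0.30}
AVG_PRICE = 0.20
ACTIONS = ["charge", "discharge"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(state, action):
    """Charging pays off below the average price, discharging above it."""
    price = PRICES[state]
    return AVG_PRICE - price if action == "charge" else price - AVG_PRICE

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in PRICES for a in ACTIONS}
    state = "low"
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if rng.random() < EPS:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        r = reward(state, action)
        next_state = rng.choice(list(PRICES))  # price evolves randomly
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning temporal-difference update.
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
        state = next_state
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in PRICES}
print(policy)  # learned charge/discharge policy per price level
```

The learned greedy policy charges the battery at the low price and discharges at the high price, which is the consumption-shifting behavior DR incentivizes; the paper's DQN/DDPG agents replace this lookup table with neural function approximators over continuous microgrid states.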
