Abstract

The construction of microgrids has promoted the large-scale integration of distributed generation and the rapid development of electric vehicles (EVs). At the same time, microgrid systems face operational challenges such as strong random disturbances from distributed sources and loads, as well as unexpected events during operation. These issues can lead to unstable microgrid frequency, excessive EV discharging, and increased control costs. To address them, this paper proposes a cooperative frequency control strategy for multimicrogrids with EVs based on improved evolutionary-deep reinforcement learning (EDRL). First, a comprehensive multimicrogrid control model is constructed that accounts for the effect of the vehicle-to-grid (V2G) process on the minimum time required to fully charge an EV, the impact of the output distribution of micro turbines (MTs) on the regulation cost, and the coupling between generator terminal voltage regulation and system frequency control. Second, to handle engineering tasks with deceptive and sparse rewards, such as integrated frequency control, evolutionary algorithms are combined with deep reinforcement learning so that training can escape local optima and approach the optimal control strategy. The algorithm is further improved with novelty search and an intelligent partition strategy, giving it better convergence characteristics and reducing communication cost and computational complexity while preserving control performance. The state space, action space, and reward function of the controller are then defined. Finally, simulation results show that the proposed controller achieves coordinated control: it effectively reduces the regulation cost of MT units and unnecessary EV discharging while meeting the frequency regulation requirements of each submicrogrid, and it significantly outperforms PID control, fuzzy control, and conventional deep reinforcement learning control.

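As an illustration of the EDRL idea summarized above, the following Python sketch shows how a population of control policies can be evolved using fitness plus a novelty bonus while a separately trained actor is periodically injected into the population. It is a minimal toy example, not the authors' implementation: the single-area frequency model, the linear policies, the novelty-archive handling, and all hyperparameters are assumptions chosen only to make the loop runnable.

```python
# Illustrative sketch (not the paper's implementation) of an evolutionary +
# learning-based hybrid loop: evolve a policy population with a novelty bonus,
# and periodically inject a separately improved actor into the population.
# The toy frequency model, linear policies, and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 3                          # e.g. [Δf, ∫Δf, EV state of charge]
POP_SIZE, GENERATIONS, SIGMA = 10, 30, 0.1

def rollout(theta, steps=50):
    """Roll out a linear policy on a toy first-order frequency-deviation model."""
    df, idf, soc = 0.0, 0.0, 0.5
    total_reward, behaviour = 0.0, []
    for t in range(steps):
        s = np.array([df, idf, soc])
        a = float(np.tanh(s @ theta))             # bounded control action
        disturbance = 0.01 * np.sin(0.3 * t)      # load / renewable fluctuation
        df += -0.1 * df + 0.5 * a + disturbance   # simplified frequency dynamics
        idf += df
        soc = np.clip(soc - 0.01 * max(a, 0.0), 0.0, 1.0)
        total_reward += -(df ** 2) - 0.01 * a ** 2  # frequency error + control cost
        behaviour.append(df)
    return total_reward, np.array(behaviour[-5:])   # behaviour descriptor for novelty

def novelty(bd, archive, k=3):
    """Mean distance to the k nearest behaviour descriptors in the archive."""
    if not archive:
        return 0.0
    d = sorted(np.linalg.norm(bd - a) for a in archive)
    return float(np.mean(d[:k]))

population = [rng.normal(0, 1, (STATE_DIM,)) for _ in range(POP_SIZE)]
archive = []
rl_actor = rng.normal(0, 1, (STATE_DIM,))           # stand-in for a DRL-trained policy

for gen in range(GENERATIONS):
    scored = []
    for theta in population:
        fitness, bd = rollout(theta)
        score = fitness + 0.5 * novelty(bd, archive)  # reward plus novelty bonus
        scored.append((score, theta, bd))
    scored.sort(key=lambda x: x[0], reverse=True)
    archive.extend(bd for _, _, bd in scored[:2])     # grow archive with elite behaviours

    # Crude surrogate for the gradient-based learner: hill-climb on plain reward.
    candidate = rl_actor + rng.normal(0, SIGMA, rl_actor.shape)
    if rollout(candidate)[0] > rollout(rl_actor)[0]:
        rl_actor = candidate

    # Next generation: elites, mutated elites, and the injected actor.
    elites = [t for _, t, _ in scored[: POP_SIZE // 2]]
    population = elites + [e + rng.normal(0, SIGMA, e.shape) for e in elites]
    population[-1] = rl_actor.copy()                  # periodic injection

best_fitness, _ = rollout(scored[0][1])
print(f"best population fitness after {GENERATIONS} generations: {best_fitness:.3f}")
```

In the full method described by the abstract, the hill-climbing surrogate would be replaced by an actual deep reinforcement learning update over the defined state, action, and reward, and the intelligent partition strategy would limit how much information the submicrogrid controllers need to exchange.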