Abstract

Stationary energy storage systems (ESSs) are now widely deployed in urban rail systems to recover regenerative braking energy. The multiple ESSs along the line, together with the substations and the traction and braking trains, form a multienergy coupling system in the traction power network, whose overall energy efficiency can be improved through coordinated control. Aiming at power flow optimization, this article proposes a cooperative control strategy for multiple ESSs based on multiagent deep reinforcement learning. Under a distributed control structure, the decision process of the multiple ESS agents is formulated as a fully cooperative Markov game in which each ESS makes independent decisions while cooperating to improve the overall energy saving effect. A value decomposition network, which decomposes the joint state-action value into per-agent value functions, is adopted to stabilize the multiagent learning process. Three train operation scenarios are simulated, and the power flow distributions are analyzed quantitatively to evaluate the performance of the proposed cooperative strategy. A power hardware-in-the-loop (PHIL) experimental platform, which integrates an RT-LAB simulator with a physical supercapacitor-based energy storage system (SCESS), is developed to emulate the dc traction power system under multitrain operation, and the proposed cooperative control strategy is implemented experimentally on this platform. Both simulation and experimental results show that, compared with a genetic algorithm (GA) baseline, the proposed strategy optimizes the energy distribution among the SCESSs and trains and improves the overall energy saving effect of the multi-SCESS system.
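The key property of the value decomposition mentioned above is additivity: the joint state-action value is the sum of per-agent value functions, so each agent can act greedily on its own value function while the team still maximizes the joint value. A minimal sketch, using illustrative Q-tables as stand-ins for each ESS agent's learned value network (the numbers and the two-agent setup are assumptions, not the paper's implementation):

```python
import numpy as np

# Two hypothetical ESS agents, each with a small discrete action set
# (e.g. charge / idle / discharge power levels).
rng = np.random.default_rng(0)
n_actions = 3

# Per-agent Q-values for one fixed joint state (illustrative numbers,
# standing in for each agent's value network output).
q_agent = [rng.normal(size=n_actions) for _ in range(2)]

# VDN-style additivity: Q_tot(a1, a2) = Q1(a1) + Q2(a2).
q_tot = q_agent[0][:, None] + q_agent[1][None, :]

# Decentralized execution: each agent argmaxes its own Q-values...
greedy_local = tuple(int(np.argmax(q)) for q in q_agent)

# ...which, by additivity, coincides with the argmax of the joint value.
greedy_joint = tuple(int(i) for i in
                     np.unravel_index(np.argmax(q_tot), q_tot.shape))
```

Because the decomposition is additive, centralized training (on `q_tot`) is compatible with fully decentralized control of each ESS, which is what allows the independent decisions described above.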
