Abstract

In high-speed railways, the pantograph-catenary system (PCS) is a critical subsystem of the train power supply system. In particular, when the double-PCS (DPCS) is in operation, the passing of the leading pantograph (LP) causes the contact force of the trailing pantograph (TP) to fluctuate violently, degrading the power collection quality of the electric multiple units (EMUs). Active pantograph control is the most promising technique for reducing the pantograph-catenary contact force (PCCF) fluctuation and improving the current collection quality. Based on the Nash equilibrium framework, this study proposes a multiagent reinforcement learning (MARL) algorithm for active pantograph control called cooperative proximal policy optimization (Coo-PPO). In the algorithm implementation, each heterogeneous agent plays a unique role in a cooperative environment, guided by a global value function. A novel reward propagation channel is then proposed to reveal implicit associations between the agents. Furthermore, a curriculum learning approach is adopted to strike a balance between reward maximization and rational movement patterns. The proposed control strategy is compared with an existing MARL algorithm and a traditional control strategy in the same scenario to validate its performance. The experimental results show that Coo-PPO obtains more rewards, significantly suppresses the fluctuation in PCCF (by up to 41.55%), and dramatically decreases the TP's offline rate (by up to 10.77%). This study is the first to apply MARL to the coordinated control of double pantographs in a DPCS.
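To make the centralized-critic idea behind such a cooperative PPO variant concrete, the sketch below shows two decentralized actors (one per pantograph) sharing a joint value function, plus a simple additive reward-sharing term standing in for the reward propagation channel. This is a minimal illustration under stated assumptions, not the paper's implementation: all names and parameters here (Actor, JointCritic, beta, clip_eps, the network sizes) are hypothetical.

```python
# Minimal sketch of a cooperative PPO update: decentralized actors, a
# centralized critic, and an additive reward-sharing term. All class and
# parameter names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_AGENTS = 8, 1, 2   # e.g. one agent each for LP and TP
clip_eps, beta = 0.2, 0.3              # PPO clip range; reward-sharing weight

class Actor(nn.Module):
    """Per-agent Gaussian policy over a continuous control input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, ACT_DIM))
        self.log_std = nn.Parameter(torch.zeros(ACT_DIM))

    def dist(self, obs):
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())

class JointCritic(nn.Module):
    """Global value function over the concatenated agent observations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM * N_AGENTS, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def forward(self, joint_obs):
        return self.net(joint_obs).squeeze(-1)

def shaped_rewards(rewards, beta):
    """Give each agent a beta-weighted share of its teammate's reward:
    a hypothetical stand-in for the paper's reward propagation channel."""
    lp_r, tp_r = rewards[:, 0], rewards[:, 1]
    return torch.stack([lp_r + beta * tp_r, tp_r + beta * lp_r], dim=1)

def ppo_loss(actor, old_log_prob, obs, act, advantage):
    """Standard PPO clipped surrogate objective for one agent."""
    new_log_prob = actor.dist(obs).log_prob(act).sum(-1)
    ratio = (new_log_prob - old_log_prob).exp()
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    return -torch.min(ratio * advantage, clipped * advantage).mean()
```

The design intent of this pattern is that the shared critic gives both actors a consistent cooperative training signal, while keeping the actors separate preserves the heterogeneous roles of the leading and trailing pantographs.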
