Abstract

The development of virtual coupling technology provides solutions to the challenges faced by urban rail transit systems. Train tracking control is a crucial component of virtual coupling operation, playing a pivotal role in ensuring the safe and efficient movement of trains within the virtually coupled formation and along the rail network. To ensure the efficiency and safety of train tracking control under virtual coupling, this paper proposes an optimization algorithm based on Soft Actor-Critic (SAC) for train tracking control. Firstly, we construct the train tracking model under a reinforcement learning architecture, using the train's operating states, the output of a Proportional-Integral-Derivative (PID) controller, and the inter-train spacing and speed difference as the elements of the reinforcement learning state, and we design the reward function for train tracking control. Then, the SAC algorithm is used to train the virtual coupling train tracking reinforcement learning model. Finally, we use the Deep Deterministic Policy Gradient (DDPG) algorithm as a comparison baseline to verify the superiority of the proposed algorithm.
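
The abstract names the ingredients of the reinforcement learning formulation: an observation built from the follower train's operating state, a PID controller's output, and the spacing and speed difference to the leading train, plus a reward that drives safe, efficient tracking. The sketch below is a minimal, illustrative train-following environment reflecting that formulation; it is not the authors' code, and all dynamics, PID gains, target spacing, and reward weights are assumptions introduced here for illustration.

```python
# Minimal sketch (not the authors' implementation): a train-following environment
# whose observation mirrors the elements named in the abstract -- the follower's
# operating state, a PID controller output, and the spacing / speed difference
# to the leading train. Dynamics, gains, and reward weights are assumed values.
import numpy as np

class VirtualCouplingTrackingEnv:
    def __init__(self, dt=0.5, target_gap=100.0):
        self.dt = dt                  # control period [s] (assumed)
        self.target_gap = target_gap  # desired inter-train spacing [m] (assumed)
        self.kp, self.ki, self.kd = 0.05, 0.001, 0.1  # assumed PID gains
        self.reset()

    def reset(self):
        # Leader and follower positions [m] and speeds [m/s] (illustrative values)
        self.leader_pos, self.leader_v = 300.0, 20.0
        self.follow_pos, self.follow_v = 150.0, 18.0
        self._int_err, self._prev_err = 0.0, 0.0
        return self._observe()

    def _pid(self, gap_error):
        # Baseline PID output, included in the observation as in the abstract
        self._int_err += gap_error * self.dt
        deriv = (gap_error - self._prev_err) / self.dt
        self._prev_err = gap_error
        return self.kp * gap_error + self.ki * self._int_err + self.kd * deriv

    def _observe(self):
        gap = self.leader_pos - self.follow_pos
        gap_error = gap - self.target_gap
        dv = self.leader_v - self.follow_v
        # [follower speed, PID output, spacing error, speed difference]
        return np.array([self.follow_v, self._pid(gap_error), gap_error, dv],
                        dtype=np.float32)

    def step(self, action):
        # action: follower acceleration command, clipped to [-1, 1] m/s^2 (assumed range)
        accel = float(np.clip(action, -1.0, 1.0))
        self.follow_v = max(0.0, self.follow_v + accel * self.dt)
        self.follow_pos += self.follow_v * self.dt
        self.leader_pos += self.leader_v * self.dt  # leader cruises at constant speed

        obs = self._observe()
        gap_error, dv = float(obs[2]), float(obs[3])
        # Reward: penalise spacing error, speed difference, and control effort
        reward = -(0.01 * gap_error ** 2 + 0.1 * dv ** 2 + 0.01 * accel ** 2)
        done = (self.leader_pos - self.follow_pos) <= 0.0  # collision ends the episode
        return obs, reward, done, {}
```

Under this formulation, an off-the-shelf SAC implementation could be trained against the environment and compared with a DDPG agent on the same task, mirroring the evaluation setup described in the abstract.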
