Abstract
Deep reinforcement learning combines strong perception with strong decision-making, can effectively handle continuous, high-dimensional state-action spaces, and has become the mainstream approach to traffic light timing. However, owing to structural defects or differing policy mechanisms, most deep reinforcement learning models suffer from divergence, failure to converge, or poor exploration. This paper therefore proposes a multi-agent Soft Actor-Critic (SAC) method for traffic light timing. Multi-agent SAC adds an entropy term, which measures the randomness of the policy, to the objective function of traditional reinforcement learning and maximizes the sum of the expected reward and this entropy term, improving the model's exploration ability. The system model can thus learn multiple optimal timing schemes rather than repeatedly selecting the same one, which would trap it in a local optimum or prevent convergence. It also discards low-reward strategies, reducing data storage and sampling complexity, accelerating training, and improving system stability. Comparative experiments show that traffic light timing based on multi-agent SAC resolves these problems of deep reinforcement learning and improves the efficiency of vehicles passing through intersections in different traffic scenarios.
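For context, the entropy-augmented objective that standard single-agent SAC maximizes, following Haarnoja et al. (2018), can be written as

J(\pi) = \sum_{t=0}^{T} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right],

where \mathcal{H}(\pi(\cdot \mid s_t)) = -\mathbb{E}_{a_t \sim \pi}[\log \pi(a_t \mid s_t)] is the entropy of the policy at state s_t and \alpha is a temperature coefficient trading off exploration against reward. The abstract does not give the exact multi-agent formulation, so this single-agent form is indicative only.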