Abstract

Multi-Agent Reinforcement Learning (MARL) has shown strong advantages in urban multi-intersection traffic signal control, but it suffers from environment non-stationarity and difficulties in inter-agent coordination. Most existing research on MARL traffic signal control focuses on designing efficient communication to address environment non-stationarity while neglecting coordination between agents. To coordinate agents, this paper combines MARL with a regional mixed-strategy Nash equilibrium to construct a Deep Convolutional Nash Policy Gradient Traffic Signal Control (DCNPG-TSC) model, which enables agents to perceive the traffic environment over a wider range and achieves effective inter-agent communication and collaboration. Within this model, a Multi-Agent Distributional Nash Policy Gradient (MADNPG) algorithm is proposed, which, for the first time, applies the mixed-strategy Nash equilibrium to improve the Multi-Agent Deep Deterministic Policy Gradient traffic signal control strategy and provide the optimal signal phase for each intersection. In addition, the eco-mobility concept is integrated into MARL traffic signal control to reduce pollutant emissions at intersections. Finally, simulation results on synthetic and real-world traffic road networks show that DCNPG-TSC outperforms other state-of-the-art MARL traffic signal control methods on almost all performance metrics, because it aggregates information from neighboring agents and optimizes each agent's decisions through gaming to find an optimal joint equilibrium strategy for the traffic road network.
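The mixed-strategy Nash equilibrium underlying MADNPG can be illustrated in its simplest form. The sketch below is not the paper's algorithm: it only shows the textbook indifference computation for an interior mixed-strategy equilibrium of a 2x2 two-player game, the basic building block that the regional equilibrium generalizes. All names here (payoff matrices `A`, `B`) are hypothetical and chosen for illustration.

```python
def mixed_nash_2x2(A, B):
    """Interior mixed-strategy Nash equilibrium of a 2x2 bimatrix game.

    A[i][j] / B[i][j]: payoffs to the row / column player when the row
    player plays action i and the column player plays action j.
    Returns (p, q): the probability that the row player plays row 0 and
    the column player plays column 0. Assumes a fully mixed equilibrium
    exists (denominators are nonzero).
    """
    # Row player's mix p makes the column player indifferent between columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # Column player's mix q makes the row player indifferent between rows.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: the unique equilibrium mixes both actions 50/50.
A = [[1, -1], [-1, 1]]    # row player's payoffs
B = [[-1, 1], [1, -1]]    # column player's payoffs (zero-sum)
p, q = mixed_nash_2x2(A, B)
print(p, q)  # → 0.5 0.5
```

In the paper's setting the "players" are neighboring intersection agents and the payoffs come from learned critics rather than a fixed matrix, but the equilibrium condition being solved is of the same kind.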
