Abstract

Traffic signal control is an essential and challenging real-world problem that aims to alleviate traffic congestion by coordinating vehicle movements at road intersections. Deep reinforcement learning (DRL) combines deep neural networks (DNNs) with the reinforcement learning framework and is a promising method for adaptive traffic signal control in complex urban traffic networks. Multi-agent deep reinforcement learning (MARL) has the potential to handle traffic signal control at a large scale. However, traffic signal control systems in practice still rely heavily on simplified rule-based methods. In this paper, we propose: (1) a MARL algorithm based on Nash equilibrium and DRL, namely Nash Asynchronous Advantage Actor-Critic (Nash-A3C); and (2) an urban simulation environment (SENV) designed to closely match real-world scenarios. Applying our method in SENV, we obtain 22.1% better performance than benchmark traffic signal control methods, which shows that Nash-A3C is better suited to large-scale industrial deployment.
