Abstract

Traffic signal control (TSC) is an established yet challenging engineering solution that alleviates traffic congestion by coordinating vehicle movements at road intersections. In theory, reinforcement learning (RL) is a promising method for adaptive TSC in complex urban traffic networks; in practice, however, deployed TSC systems still rely heavily on simplified rule-based methods. In this paper, we propose: (1) two game-theoretic RL algorithms that incorporate Nash equilibria into actor–critic learning, namely Nash Advantage Actor–Critic (Nash-A2C) and Nash Asynchronous Advantage Actor–Critic (Nash-A3C); (2) a distributed Internet of Things (IoT) computing architecture for traffic simulation, whose fog layer is well suited to deploying distributed TSC methods such as Nash-A3C. We deploy both methods in this architecture and outperform benchmark TSC methods, reducing congestion time by 22.1% and network delay by 9.7%.
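
The abstract does not spell out the update rule, but the core idea of combining Nash-equilibrium action selection with an advantage actor–critic update can be illustrated with a minimal numpy sketch. Everything below is a hypothetical simplification, not the paper's implementation: two neighbouring intersections choose between two signal phases, per-agent payoff matrices (e.g. negative expected queue lengths) are given rather than learned, and the actor–critic is tabular over a single state.

    # Illustrative sketch only: hypothetical payoffs, phase space, and
    # single-state tabular learners; not the paper's Nash-A2C implementation.
    import numpy as np

    N_PHASES = 2  # hypothetical: north-south green vs east-west green

    def pure_nash(payoff_a, payoff_b):
        """Return a pure-strategy Nash equilibrium (i, j) of a 2-player game,
        or None if none exists. payoff_x[i, j] is agent x's payoff."""
        for i in range(payoff_a.shape[0]):
            for j in range(payoff_a.shape[1]):
                best_a = payoff_a[i, j] >= payoff_a[:, j].max()  # A cannot gain by deviating
                best_b = payoff_b[i, j] >= payoff_b[i, :].max()  # B cannot gain by deviating
                if best_a and best_b:
                    return i, j
        return None

    class A2CAgent:
        """Tabular actor-critic for one discrete state (toy setting)."""
        def __init__(self, n_actions, lr=0.1):
            self.logits = np.zeros(n_actions)  # actor parameters
            self.value = 0.0                   # critic estimate V(s)
            self.lr = lr

        def policy(self):
            e = np.exp(self.logits - self.logits.max())
            return e / e.sum()

        def update(self, action, reward):
            advantage = reward - self.value    # one-step advantage, no bootstrap
            grad = -self.policy()
            grad[action] += 1.0                # d log pi(a|s) / d logits
            self.logits += self.lr * advantage * grad
            self.value += self.lr * advantage  # critic moves toward observed return

    # Hypothetical per-agent payoffs over joint phases (rows: agent A, cols: agent B).
    payoff_a = np.array([[3.0, 1.0], [0.0, 2.0]])
    payoff_b = np.array([[2.0, 0.0], [1.0, 3.0]])

    agents = [A2CAgent(N_PHASES), A2CAgent(N_PHASES)]
    eq = pure_nash(payoff_a, payoff_b)
    if eq is not None:
        a, b = eq
        # Each agent updates toward its payoff at the equilibrium joint action.
        agents[0].update(a, payoff_a[a, b])
        agents[1].update(b, payoff_b[a, b])
        print("equilibrium phases:", eq)
        print("policies:", agents[0].policy(), agents[1].policy())

In this toy game, (0, 0) is the unique pure-strategy equilibrium, so both policies shift probability mass toward phase 0; in a distributed deployment such as the fog layer described above, each intersection would run such an update locally while exchanging payoff estimates with its neighbours.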
