Abstract

Exploiting reinforcement learning (RL) for traffic congestion reduction is a frontier topic in intelligent transportation research. The difficulty of this problem stems from the RL agent's inability to simultaneously monitor multiple traffic signals while accounting for the complicated traffic dynamics across different regions of a traffic system. The challenge becomes even more pronounced when forming control decisions over a large-scale traffic grid, where the RL action space grows exponentially with the number of intersections in the grid. In this paper, we tackle this problem by proposing a cooperative deep reinforcement learning (Coder) framework. The intuition behind Coder is to decompose the original, difficult RL task into a number of subproblems with relatively easy RL goals. Accordingly, we implement Coder with multiple regional agents and a centralized global agent. Each regional agent learns its own RL policy and value functions over a small region with a limited action set. The centralized global agent then hierarchically aggregates the learning results from the regional agents and forms the final Q-function over the entire large-scale traffic grid. Experimental investigations demonstrate that the proposed Coder reduces congestion by roughly 30% on average, measured by the number of waiting vehicles, under high-density traffic flows in simulation.
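The abstract gives no implementation details, so the following is a minimal sketch of the decomposition idea in PyTorch. The class names `RegionalAgent` and `GlobalAgent`, the MLP aggregator, and all dimensions are illustrative assumptions rather than the paper's actual architecture; the paper states only that regional Q-functions are hierarchically aggregated into a global Q-function.

```python
import torch
import torch.nn as nn


class RegionalAgent(nn.Module):
    """Q-network for one small region with a limited action set (assumed design)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.q_net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Returns per-action Q-values for this region: (batch, n_actions)
        return self.q_net(obs)


class GlobalAgent(nn.Module):
    """Aggregates regional Q-vectors into a global Q-function.

    The paper says the aggregation is hierarchical but does not specify its
    form; a plain MLP over the concatenated regional Q-values is used here
    purely for illustration.
    """

    def __init__(self, regional_q_dims, n_global_actions: int, hidden: int = 128):
        super().__init__()
        self.mixer = nn.Sequential(
            nn.Linear(sum(regional_q_dims), hidden), nn.ReLU(),
            nn.Linear(hidden, n_global_actions),
        )

    def forward(self, regional_qs) -> torch.Tensor:
        # Concatenate each region's Q-vector and map to joint-action Q-values.
        return self.mixer(torch.cat(regional_qs, dim=-1))


# Illustrative sizes: two regions, 4 signal phases each, so the joint
# action space has 4**2 = 16 combinations (this exponential growth is
# exactly what the regional decomposition is meant to tame).
regions = [RegionalAgent(obs_dim=8, n_actions=4) for _ in range(2)]
global_agent = GlobalAgent(regional_q_dims=[4, 4], n_global_actions=16)

obs = [torch.randn(1, 8) for _ in range(2)]       # per-region observations
regional_qs = [agent(o) for agent, o in zip(regions, obs)]
global_q = global_agent(regional_qs)              # (1, 16)
joint_action = global_q.argmax(dim=-1)            # greedy joint control decision
```

In this sketch each regional network only ever scores its own handful of actions, and only the small mixer sees the joint action space, which is one plausible reading of how Coder keeps per-agent learning tractable.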
