Abstract

This study proposes a new adaptive traffic signal control scheme to effectively manage dynamically fluctuating traffic flows through intersections. A spatial-temporal representation of the traffic state at an intersection is designed to efficiently identify traffic patterns in complex intersection environments, and a deep neural network (a long short-term memory network, LSTM) is used to determine look-ahead signal control decisions based on the estimated long-term feedback from a given traffic state. The actor-critic algorithm, a reinforcement learning method, is adopted to obtain the essential parameters of the LSTM network through repeated interaction between a simulated environment and the corresponding adaptive traffic signal controller. A realistic model environment with a 24-hour time-varying traffic demand, covering both rush-hour and non-rush-hour conditions, served as the basis for traffic generation in the numerical experiments used to confirm the effectiveness of the proposed scheme. The results of these experiments show that, compared with an optimized fixed-time plan (Synchro), the proposed scheme can reduce waiting times at intersections by around 50%, with the consequential benefits of reduced fuel consumption, emissions, queue lengths, and vehicle delays, whilst increasing mean speeds.
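
As a minimal sketch of this architecture, the fragment below pairs an LSTM backbone with an actor head (signal-phase selection) and a critic head (state-value estimation) and applies a one-step actor-critic update. It is written in PyTorch purely for illustration; the network sizes, learning rate, and discount factor are our assumptions, not the authors' settings, and it omits the multistep bootstrapping and clipped surrogate refinements the paper describes later.

    import torch
    import torch.nn as nn

    class ActorCriticLSTM(nn.Module):
        """LSTM backbone shared by a policy head (actor) and a value head (critic)."""
        def __init__(self, state_dim, n_phases, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
            self.actor = nn.Linear(hidden, n_phases)   # logits over signal phases
            self.critic = nn.Linear(hidden, 1)         # state-value estimate

        def forward(self, states):
            # states: (batch, time, state_dim) sequence of traffic snapshots
            out, _ = self.lstm(states)
            h = out[:, -1]                              # summary of the sequence
            return self.actor(h), self.critic(h).squeeze(-1)

    net = ActorCriticLSTM(state_dim=160, n_phases=4)    # sizes are illustrative
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    def train_step(states, action, reward, next_value, gamma=0.99):
        """One-step TD actor-critic update for the phase `action` actually taken."""
        logits, value = net(states)
        dist = torch.distributions.Categorical(logits=logits)
        td_error = reward + gamma * next_value - value      # TD(0) advantage
        loss = (-dist.log_prob(action) * td_error.detach()  # policy-gradient term
                + td_error.pow(2))                          # value-regression term
        opt.zero_grad()
        loss.mean().backward()
        opt.step()

Here the reward would be derived from the delay-based cost function, so that lower intersection delay yields higher return.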

Highlights

  • It is generally accepted that the efficient management of traffic flows to reduce travel delay, especially through intersections, is an essential objective in traffic management

  • We provide a Reinforcement Learning (RL) framework specially designed for adaptive traffic control

  • In this paper, we propose a new adaptive traffic signal control scheme that produces optimized traffic control policies to minimize the delay of vehicles passing through intersections

Summary

INTRODUCTION

It is generally accepted that the efficient management of traffic flows to reduce travel delay, especially through intersections, is an essential objective in traffic management. The Annual Average Daily Traffic (AADT), obtained from historical observation data, is commonly used to provide aggregate information for traffic signal control; however, we found that few algorithms explicitly take into account 24-hour time-varying traffic demands, which are arguably needed to further enhance the design of traffic signal control schemes. Addressing these problems, the research underpinning this study establishes a fundamental decision-making framework that uses a Reinforcement Learning (RL) approach supported by a deep neural network to implement an adaptive traffic control algorithm [17], [18]. The contributions of this paper can be summarized as follows. Firstly, previous RL studies usually ignored individual vehicular delay information as an input to traffic controllers. Addressing this omission, we propose a novel traffic state definition that identifies both spatial and temporal patterns using microscopic traffic delay information.
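
A hypothetical illustration of such a state definition, assuming a cell-based lane discretization (the function name, cell length, and grid size below are ours for illustration; the paper's STATE DEFINITION section gives the exact form): each approach lane is split into fixed-length cells, and an occupied cell records the accumulated delay of the vehicle in it. Stacking successive snapshots then supplies the temporal dimension consumed by the LSTM.

    import numpy as np

    def build_state(vehicles, n_lanes=8, n_cells=20, cell_len=7.0):
        """Map each vehicle to a (lane, cell) slot holding its accumulated delay.

        vehicles: iterable of (lane_index, distance_to_stopline_m, delay_s).
        Returns an (n_lanes, n_cells) matrix: 0 for empty cells, otherwise the
        delay of the occupying vehicle (the microscopic delay information).
        """
        state = np.zeros((n_lanes, n_cells))
        for lane, dist, delay in vehicles:
            cell = min(int(dist // cell_len), n_cells - 1)
            state[lane, cell] = delay
        return state

    # Two queued vehicles in lane 0 and one free-flowing vehicle in lane 3.
    snapshot = [(0, 3.0, 12.5), (0, 10.0, 9.0), (3, 95.0, 0.0)]
    print(build_state(snapshot))   # one spatial frame; the LSTM consumes a
                                   # sequence of such frames over time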

LITERATURE REVIEW
CELL-BASED INTERSECTION LAYOUT
DELAY AND COST FUNCTION
STATE DEFINITION
REINFORCEMENT LEARNING FOR TRAFFIC SIGNAL CONTROL
NEURAL NETWORK STRUCTURE USING THE LSTM
PARAMETER OPTIMIZATION USING ACTOR-CRITIC ALGORITHM
MULTISTEP BOOTSTRAPPING TECHNIQUE
CLIPPED SURROGATE OBJECTIVE TECHNIQUE
EXPERIMENTS AND RESULTS
RESULTS ANALYSIS
CONCLUSION