Abstract

The acceleration of urbanization has led to increasingly severe traffic congestion, creating an urgent need for effective traffic signal control strategies to improve road efficiency. This paper proposes an adaptive traffic signal control method based on offline reinforcement learning (Offline RL) to address the limitations of traditional fixed-time signal control. By monitoring key parameters such as real-time traffic flow and queue length, the proposed method dynamically adjusts signal phases and their durations in response to rapidly changing traffic conditions. At the core of this work is a model named SD3-Light, which uses offline reinforcement learning to predict optimal signal phase sequences and durations from real-time intersection state features. In addition, this paper constructs a comprehensive offline dataset, which allows the model to be trained without interacting with live traffic, thereby reducing data-collection costs and improving the model's generalization ability. Experiments on real-world traffic datasets demonstrate the effectiveness of the proposed method in reducing average travel time, and comparisons with several existing methods highlight its clear advantages in traffic management efficiency.
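The abstract describes a policy that maps intersection state features (e.g. per-lane queue lengths) to a signal phase, trained purely from a logged dataset. The sketch below is not the paper's SD3-Light; it is a minimal illustration of the offline idea using a contextual-bandit-style approach: fit one reward model per phase from logged transitions only, then act greedily at deployment. All names, dimensions, and the reward definition are assumptions for illustration.

```python
# Hedged sketch of offline signal-phase selection (NOT SD3-Light).
# Assumptions: 4 phases, 8 lanes, reward = negative total queue length.
import numpy as np

N_PHASES = 4      # assumed number of signal phases
STATE_DIM = 8     # assumed: queue length for each of 8 monitored lanes

rng = np.random.default_rng(0)

# Logged offline dataset of (state, chosen phase, observed reward);
# no interaction with a live intersection is required for training.
states = rng.integers(0, 10, size=(500, STATE_DIM)).astype(float)
phases = rng.integers(0, N_PHASES, size=500)
rewards = -states.sum(axis=1) + rng.normal(scale=1.0, size=500)

# Fit one linear reward model per phase via least squares on the log.
weights = np.zeros((N_PHASES, STATE_DIM + 1))
for a in range(N_PHASES):
    mask = phases == a
    X = np.hstack([states[mask], np.ones((mask.sum(), 1))])  # bias column
    weights[a], *_ = np.linalg.lstsq(X, rewards[mask], rcond=None)

def choose_phase(state):
    """Pick the phase whose fitted model predicts the highest reward."""
    x = np.append(state, 1.0)
    return int(np.argmax(weights @ x))

phase = choose_phase(np.array([3, 0, 7, 2, 1, 0, 4, 2], dtype=float))
```

A full offline RL method such as the paper's would additionally model long-horizon effects (sequences of phases and durations) and guard against distribution shift between the logged policy and the learned one; this sketch only conveys the train-from-logs, act-greedily structure.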
