Abstract

With the integration of artificial intelligence into traffic systems, intelligent transportation systems are exploiting broader sensing coverage and greater computational capability to deliver data-intensive solutions that outperform traditional systems. This paper applies deep reinforcement learning to a practical setting and proposes an intelligent emergency traffic signal control system based on Deep Reinforcement Learning (DRL). The system accounts for pedestrian movement and uses real-time traffic data and environmental information to model traffic flow and road conditions within a novel state space. It employs the Dueling Double Deep Q-Network (D3QN) to optimize the signal control strategy, dynamically adjusting signal timings to improve operational efficiency at intersections. Using the Weibull distribution to simulate realistic traffic congestion, with real traffic data from Shanyin Road in Hangzhou for validation, the results show that this method converges faster and more stably than comparable methods and significantly reduces traffic congestion. Furthermore, by incorporating pedestrian movement, the method reduces pedestrian waiting times by 44.736% during peak periods and 22.95% during off-peak periods while maintaining comparable vehicle queue lengths, delay times, and carbon dioxide emissions. These results demonstrate the method's potential to improve smart urban mobility and relieve congestion at intersections.
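The abstract mentions using the Weibull distribution to simulate realistic traffic congestion. As a minimal sketch of how such a vehicle-arrival profile might be generated (the function name, shape parameter, and rescaling scheme are assumptions, not taken from the paper):

```python
import random

def weibull_arrival_times(n_vehicles, horizon, shape=2.0, seed=42):
    """Generate sorted vehicle departure times over [0, horizon] seconds.

    Draws n_vehicles samples from a Weibull distribution with the given
    shape parameter and rescales them onto the simulation horizon, so a
    shape > 1 clusters departures and mimics a congestion build-up.
    """
    rng = random.Random(seed)
    raw = [rng.weibullvariate(1.0, shape) for _ in range(n_vehicles)]
    scale = horizon / max(raw)  # map the largest draw to the end of the horizon
    return sorted(round(t * scale, 1) for t in raw)

# e.g. 10 departure times spread over a one-hour simulation
times = weibull_arrival_times(10, 3600)
```

In a SUMO-style setup, each generated time would become a vehicle's `depart` attribute in the route file fed to the simulator.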
