To improve traffic flow and reduce vehicle energy consumption and emissions at intersections, a signal optimization method based on deep reinforcement learning (DRL) is proposed. The algorithm uses Rainbow DQN as its core framework, incorporating vehicle position, speed, and acceleration into the state space. The reward function jointly considers two objectives, reducing vehicle waiting times and minimizing carbon emissions, with vehicle queue length serving as a weighting factor. Additionally, an ACmix module, which integrates self-attention mechanisms with convolutional layers, is introduced to enhance the model’s feature extraction and information representation capabilities and to improve computational efficiency. The model is evaluated on a real-world intersection, with a signalized-intersection simulation built in SUMO. The proposed approach is compared with traditional Webster signal timing, actuated signal timing, and control strategies based on DQN and D3QN models. The results show that, through real-time signal timing adjustments, the proposed strategy reduces the average vehicle waiting time by approximately 27.58% and average CO2 emissions by about 7.34% compared with actuated signal timing. A comparison with the DQN and D3QN models further demonstrates the superiority of the proposed model, which achieves a 15% reduction in average waiting time and a 6.5% reduction in CO2 emissions. The model’s applicability is validated under various scenarios, including different proportions of electric vehicles and different traffic volumes. This study provides a flexible signal control strategy that enhances intersection vehicle flow and reduces carbon emissions, offering a reference for the development of green, intelligent transportation systems and holding practical significance for urban carbon reduction efforts.
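The two-objective reward described above could be sketched as follows; note that the function name, the linear weighting form, and the `max_queue` normalization constant are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a two-objective DRL reward that penalizes both
# waiting time and CO2 emissions, with vehicle queue length acting as the
# weighting factor (all names and the linear form are assumptions).
def signal_reward(wait_time: float, co2_emission: float,
                  queue_length: float, max_queue: float = 20.0) -> float:
    """Return a (negative) reward combining delay and emission penalties."""
    # Normalize queue length into [0, 1]; longer queues shift the
    # emphasis toward reducing waiting time.
    w = min(queue_length / max_queue, 1.0)
    return -(w * wait_time + (1.0 - w) * co2_emission)
```

In this sketch, an empty intersection yields zero penalty, while a saturated queue makes the agent optimize almost entirely for delay reduction.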