Abstract

Through sharing real-time traffic information and perceiving complex environments, connected and automated vehicles (CAVs) are endowed with global decision-making capabilities far beyond those of human drivers. Given information from multiple traffic lights, this planning ability can be strengthened considerably. This study proposes an adaptive speed planning method for CAVs based on deep reinforcement learning (DRL) trained across multiple traffic lights, aiming to improve the fuel economy and ride comfort of CAVs. With a suitably designed reward function, the training algorithm takes the key environmental information received by the vehicle as input and outputs the acceleration that maximizes the cumulative reward. The results show that the trained DRL agent adapts to varied traffic-light scenarios and quickly solves for an approximately optimal speed trajectory. Multi-light DRL models save 6.79% fuel compared with single-light ones, and outperform a non-RL multi-light optimization method in both fuel economy and computational efficiency.
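To make the reward-shaping idea concrete, the following is a minimal illustrative sketch of the kind of per-step reward such an acceleration-selecting DRL agent might maximize. All terms, weights, and parameter names (`w_fuel`, `w_comfort`, `w_light`, the fuel proxy `|a|*v`) are assumptions for illustration, not the paper's actual formulation.

```python
def reward(v, a, prev_a, d_light, phase_ok,
           w_fuel=1.0, w_comfort=0.5, w_light=2.0):
    """Return a hypothetical scalar reward for one control step.

    v        : current speed (m/s)
    a        : chosen acceleration (m/s^2)
    prev_a   : previous acceleration, used for a jerk (comfort) penalty
    d_light  : distance to the next traffic light (m)
    phase_ok : True if the light is expected to be green on arrival
    """
    fuel_penalty = w_fuel * abs(a) * max(v, 0.0)      # crude fuel proxy ~ |a| * v
    comfort_penalty = w_comfort * (a - prev_a) ** 2   # penalize jerk
    # Reward timing the green phase; far-away lights are ignored.
    light_bonus = w_light if (phase_ok or d_light > 200.0) else -w_light
    return light_bonus - fuel_penalty - comfort_penalty

# Example: a gentle acceleration toward a green window scores higher
# than a harsh one in the same state.
r_gentle = reward(v=12.0, a=0.5, prev_a=0.4, d_light=150.0, phase_ok=True)
r_harsh = reward(v=12.0, a=2.5, prev_a=-1.0, d_light=150.0, phase_ok=True)
```

In training, a DRL algorithm (e.g. a value- or policy-gradient method) would select the acceleration maximizing the expected cumulative sum of such rewards over the drive.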
