Abstract
On-ramp merging is a complex traffic scenario in autonomous driving. Because of the uncertainty of the driving environment, most rule-based models cannot handle it. This paper designs a ramp-merging decision model based on the deep deterministic policy gradient (DDPG) algorithm to solve the vehicle merging problem. Previous deep reinforcement learning algorithms applied to intelligent-vehicle ramp merging suffer from slow merging and poor robustness, which leads to a low merging success rate. To address these problems, we first introduce a simple recurrent unit (SRU) to extract the state of the intelligent vehicle and the features of its environment, and use the DDPG algorithm for decision making. Second, the experience replay buffer of the DDPG algorithm is improved by replacing uniform sampling with priority sampling. Finally, a multi-objective reward function that accounts for factors such as safety and efficiency is used during training. Simulation experiments show that the improved algorithm increases the merging speed of the model, reduces the collision rate, and enables the vehicle to make more reasonable decisions. In addition, comparison with a state-of-the-art method demonstrates the superiority of the proposed approach.
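The abstract states only that priority sampling replaces uniform sampling in the replay buffer; the sketch below illustrates one common way this is done, a proportional, TD-error-based prioritized replay buffer. The class name, parameters (alpha, beta, eps), and the importance-sampling correction are assumptions, not details taken from the paper.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Replay buffer that samples transitions in proportion to a
    TD-error-based priority rather than uniformly (assumed variant)."""

    def __init__(self, capacity, alpha=0.6, eps=1e-5):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.eps = eps              # keeps every priority strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, state, action, reward, next_state, done):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append((state, action, reward, next_state, done))
        else:
            self.buffer[self.pos] = (state, action, reward, next_state, done)
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[: len(self.buffer)] ** self.alpha
        probs = prios / prios.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Priority is the magnitude of the TD error plus a small constant.
        self.priorities[idx] = np.abs(td_errors) + self.eps
```

In a DDPG training loop, the critic's TD errors for each sampled batch would be fed back through `update_priorities`, so transitions the critic predicts poorly are revisited more often.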