Abstract

Safe and efficient control for autonomous vehicles (AVs) at on-ramp merging sections is challenging. Previous work on on-ramp merging has focused on discrete behavioral decisions and longitudinal motion control, while lane-changing (LC) behavior is usually oversimplified. To bridge these research gaps, this paper proposes an on-ramp merging optimization control framework (ORMOC) based on a deep reinforcement learning (DRL) approach, which jointly optimizes lane-keeping (LK) and LC control. In the framework, an LC agent based on DRL and a quintic polynomial trajectory model is first trained to generate smooth LC trajectories. Then, an LK agent combined with the LC agent is trained via shared learning, handling longitudinal motion control and LC decision execution. Finally, a priority-based safety supervisor is developed to enhance the safety of the control output. The proposed method is evaluated in SUMO simulation experiments, and the results demonstrate that, compared to a TTC (time-to-collision)-based method, the proposed DRL-ORMOC framework reduces travel time by 8.3%.
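The smooth LC trajectory mentioned above comes from a quintic polynomial, a standard choice because it lets position, velocity, and acceleration all be matched at both endpoints. Below is a minimal sketch of solving for the six coefficients; the boundary values (3.5 m lane width, 4 s maneuver duration) are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    """Coefficients c0..c5 of x(t) = c0 + c1*t + ... + c5*t**5 matching
    position, velocity, and acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,      0,       0      ],  # x(0)
        [0, 1, 0,    0,      0,       0      ],  # x'(0)
        [0, 0, 2,    0,      0,       0      ],  # x''(0)
        [1, T, T**2, T**3,   T**4,    T**5   ],  # x(T)
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4 ],  # x'(T)
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],  # x''(T)
    ])
    b = np.array([x0, v0, a0, xT, vT, aT])
    return np.linalg.solve(A, b)

# Lateral lane change: move 3.5 m in 4 s, starting and ending at rest
# (zero lateral velocity and acceleration), so the path has no jerk spikes
# at either end.
c = quintic_coeffs(0.0, 0.0, 0.0, 3.5, 0.0, 0.0, 4.0)
y_mid = np.polyval(c[::-1], 2.0)  # lateral offset at the midpoint, 1.75 m by symmetry
```

With rest-to-rest boundary conditions this reduces to the classic minimum-jerk profile, which is why the resulting lateral motion is comfortable for passengers.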
