Abstract

Controlling an autonomous vehicle's unprotected left turn at an intersection is a challenging task. Traditional rule-based autonomous driving decision and control algorithms struggle to construct accurate and trustworthy mathematical models for such circumstances owing to their considerable uncertainty and unpredictability. To overcome this problem, a rule-constrained reinforcement learning (RCRL) control method for autonomous driving is proposed in this work. To train a reinforcement learning controller with rule constraints, the outputs of the path planning module are used as a goal condition in the reinforcement learning framework. Because these outputs incorporate vehicle dynamics, the proposed approach is safer and more reliable than end-to-end learning, ensuring that the generated trajectories are locally optimal while adapting to unpredictable situations. In the experiments, a highly randomized two-way four-lane intersection is built in the CARLA simulator to verify the effectiveness of the proposed RCRL control method. The results show that the proposed method provides real-time safe planning and ensures high passing efficiency for autonomous vehicles in the unprotected left turn task.
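To make the goal-conditioning idea concrete, the minimal sketch below shows one plausible way a rule-based planner's waypoints could be supplied to an RL policy as a goal and folded into a rule-constrained reward. It assumes a simple 2D ego state and evenly spaced waypoints; the function names, state layout, and reward weights are illustrative assumptions, not the paper's actual formulation or the CARLA API.

```python
import numpy as np

def plan_reference_path(start_xy, goal_xy, num_points=10):
    """Hypothetical stand-in for the rule-based path planner:
    returns evenly spaced waypoints from the start to the goal."""
    return np.linspace(start_xy, goal_xy, num_points)

def build_goal_conditioned_observation(ego_state, waypoints):
    """Concatenate the ego vehicle state with the planner's waypoints,
    so the RL policy is conditioned on the rule-based reference path."""
    return np.concatenate([ego_state, waypoints.flatten()])

def rule_constrained_reward(ego_xy, waypoints, collided, speed):
    """Illustrative reward: penalize deviation from the reference path
    and collisions, and reward forward progress (speed)."""
    deviation = np.min(np.linalg.norm(waypoints - ego_xy, axis=1))
    reward = -0.5 * deviation + 0.1 * speed
    if collided:
        reward -= 100.0
    return reward

# Example: an unprotected left turn, with the ego vehicle at the stop line
# and the goal placed in the target lane after the turn.
ego_state = np.array([0.0, 0.0, 0.0, 5.0])          # x, y, heading, speed
waypoints = plan_reference_path(np.array([0.0, 0.0]),
                                np.array([-15.0, 20.0]))
obs = build_goal_conditioned_observation(ego_state, waypoints)
r = rule_constrained_reward(ego_state[:2], waypoints,
                            collided=False, speed=ego_state[3])
print(obs.shape, round(r, 3))
```

In this reading, the rule constraint enters through the reference path rather than through the policy network itself, which is consistent with the abstract's claim that the planner's output keeps the learned trajectories close to a dynamically feasible, locally optimal reference.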
