Abstract

Controlling an autonomous vehicle's unprotected left turn at an intersection is a challenging task. Traditional rule‐based autonomous driving decision and control algorithms struggle to construct accurate and trustworthy mathematical models for such circumstances, owing to the considerable uncertainty and unpredictability of these scenarios. To overcome this problem, a rule‐constrained reinforcement learning (RCRL) control method for autonomous driving is proposed in this work. To train a reinforcement learning controller under rule constraints, the outputs of the path planning module are used as goal conditions in the reinforcement learning framework. Because the rule constraints incorporate vehicle dynamics, the proposed approach is safer and more reliable than end‐to‐end learning, ensuring that the generated trajectories are locally optimal while still adapting to unpredictable situations. In the experiments, a highly randomized two‐way four‐lane intersection is built in the CARLA simulator to verify the effectiveness of the proposed RCRL control method. The results show that the proposed method provides real‐time safe planning and ensures high passing efficiency for autonomous vehicles in the unprotected left turn task.
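
The abstract describes conditioning the reinforcement learning controller on the path planner's output. Below is a minimal sketch of what such a goal-conditioned setup could look like: the next few planned waypoints are appended to the ego state to form the policy's observation, and the reward combines waypoint tracking with a hard penalty for rule violations. The function names, reward weights, and interfaces here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch only: planned_waypoints, ego_state, and the reward
# weights below are assumptions, not the paper's interface or parameters.

def goal_conditioned_observation(ego_state, planned_waypoints, horizon=5):
    """Concatenate the ego vehicle state with the next few planned
    waypoints so the RL policy is conditioned on the planner's goal."""
    goals = np.asarray(planned_waypoints[:horizon], dtype=np.float32).ravel()
    return np.concatenate([np.asarray(ego_state, dtype=np.float32), goals])

def rule_constrained_reward(ego_xy, target_xy, speed, collision, rule_violated):
    """Reward sketch: track the planned waypoint, keep making progress,
    and apply a large penalty when a safety rule is violated."""
    tracking_error = np.linalg.norm(np.asarray(ego_xy) - np.asarray(target_xy))
    reward = -0.5 * tracking_error + 0.1 * speed
    if collision or rule_violated:
        reward -= 100.0  # hard penalty acting as the rule constraint
    return reward
```

In this sketch the rule constraint is enforced through a penalty term rather than a formal constraint solver; the paper's actual mechanism for combining planner outputs with rule constraints may differ.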
