Abstract

Tactical planning is crucial for safe and efficient highway driving. The problem is complicated, however, by the uncertain intentions of surrounding vehicles and by observation noise arising from measurement error and perception failures. Rule-based tactical planners handle dynamic, uncertain scenarios poorly and are susceptible to observation noise. To tackle this problem, we propose a hierarchical tactical planning framework based on residual reinforcement learning. In addition, we develop a reinforcement-learning-from-demonstrations scheme that treats rule-based methods as soft guidance, combining prior knowledge with data-driven learning. Under this framework and training scheme, rule-based methods are not only improved in highway scenarios with uncertainty and observation noise, but also guide the training procedure, increasing sampling efficiency. To encourage deep and consistent exploration in a vehicle system with inertia, we further employ noisy networks when learning the policy. The proposed method is validated in a stochastic, uncertain simulation environment; the results show that it outperforms both rule-based and purely data-driven methods in safety and driving efficiency under noisy observations and uncertainty.
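The core ideas named in the abstract (residual reinforcement learning on top of a rule-based base policy, with noisy-network weights for exploration) can be illustrated with a minimal sketch. Everything here is hypothetical: the abstract does not specify the rule-based policy, the observation layout, or the network architecture, so the gap/speed observation, the simple braking rule, and the single noisy linear layer below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rule_based_policy(obs):
    """Hypothetical rule-based base policy: brake hard when the gap to the
    lead vehicle is small, otherwise accelerate gently (1-D action)."""
    gap, ego_speed = obs
    return -1.0 if gap < 10.0 else 0.5

class NoisyResidual:
    """Single noisy linear layer in the spirit of NoisyNet: each weight is
    mu + sigma * eps, with eps resampled per rollout so that exploration is
    consistent over a whole trajectory rather than per-step jitter."""
    def __init__(self, in_dim, rng):
        self.rng = rng
        self.mu = rng.normal(0.0, 0.1, size=in_dim)   # learnable mean
        self.sigma = np.full(in_dim, 0.05)             # learnable noise scale
        self.resample()

    def resample(self):
        # Draw fresh weight noise (done once per episode/rollout).
        self.eps = self.rng.standard_normal(self.mu.shape)

    def __call__(self, obs):
        w = self.mu + self.sigma * self.eps
        return float(w @ np.asarray(obs, dtype=float))

def residual_policy(obs, residual):
    """Residual RL: final action = rule-based base action + learned correction."""
    return rule_based_policy(obs) + residual(obs)
```

During training, only the residual head is updated, so the policy starts from the rule-based behavior and learns corrections for the cases (noisy observations, uncertain intentions) the rules handle poorly.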
