Abstract

Planning an appropriate driving trajectory for route following is an important function of autonomous driving. Behavioral cloning, which enables automatic trajectory learning and improvement, has been used effectively in driving trajectory planning. However, existing behavioral cloning methods typically rely on large amounts of reliable labels that are time-consuming and laborious to produce. To address this problem, this paper proposes a new off-policy imitation learning method for autonomous driving based on task knowledge distillation. The method clones human driving behavior and effectively transfers the learned driving strategies to scenarios with domain shift. Experimental results indicate that our method achieves satisfactory route-following performance in realistic urban driving scenes and can transfer the driving strategies to new, unseen scenes under a variety of illumination and weather conditions.
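As a rough illustration only, the sketch below shows one common way to combine a behavioral-cloning loss on expert labels with a knowledge-distillation term from a teacher policy trained on the source domain. The abstract does not specify the paper's architecture or loss functions, so the network layout, loss weighting, and all names here are hypothetical placeholders, not the authors' method.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small CNN policy mapping a front-camera image to driving commands
    (e.g., steering and throttle). Purely illustrative architecture."""
    def __init__(self, n_actions: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_actions)

    def forward(self, x):
        return self.head(self.encoder(x))

def bc_distill_loss(student_out, expert_actions, teacher_out, alpha=0.5):
    """Behavioral-cloning loss against recorded human commands, plus a
    distillation term pulling the student toward a frozen teacher policy."""
    bc = nn.functional.mse_loss(student_out, expert_actions)
    distill = nn.functional.mse_loss(student_out, teacher_out)
    return bc + alpha * distill

# Hypothetical usage with random tensors standing in for a camera-frame batch.
student, teacher = PolicyNet(), PolicyNet()
images = torch.randn(8, 3, 96, 96)
expert = torch.randn(8, 2)             # recorded human driving commands
with torch.no_grad():
    teacher_out = teacher(images)      # teacher predictions (no gradient)
loss = bc_distill_loss(student(images), expert, teacher_out)
loss.backward()
```

In this kind of setup, the distillation weight (here `alpha`) trades off imitating the human labels against staying close to the source-domain teacher when adapting to a shifted target domain; the actual formulation used in the paper may differ.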
