Abstract

Learning-based approaches, such as reinforcement learning (RL) and imitation learning (IL), have demonstrated advantages over rule-based approaches in complex urban autonomous driving environments, showing great potential for intelligent decision-making. However, current RL and IL approaches still have their own drawbacks, such as low data efficiency for RL and poor generalization capability for IL. In light of this, this paper proposes a novel learning-based method that combines deep reinforcement learning and imitation learning from expert demonstrations, applied to longitudinal vehicle motion control in autonomous driving scenarios. The proposed method employs the soft actor-critic structure and modifies the learning process of the policy network to pursue both goals of maximizing reward and imitating the expert. Moreover, an adaptive prioritized experience replay is designed to sample experience from both the agent's self-exploration and the expert demonstration, in order to improve sample efficiency. The proposed method is validated in a simulated urban roundabout scenario and compared with several prevailing RL and IL baseline approaches. The results show that the proposed method trains faster and achieves safer and more time-efficient navigation.
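The adaptive replay mechanism described above can be sketched as a buffer that draws each mini-batch partly from expert demonstrations and partly from the agent's own transitions, annealing the expert share over training. This is a minimal illustration only: the class name, the fixed decay schedule, and uniform sampling within each buffer are assumptions for clarity, not the paper's actual prioritization rule.

```python
import random


class MixedReplayBuffer:
    """Sketch of a replay buffer mixing agent and expert experience.

    The expert-sample ratio decays each time a batch is drawn, so the
    agent gradually relies more on its own exploration (an assumed
    schedule; the paper's adaptive prioritization may differ).
    """

    def __init__(self, expert_data, initial_expert_ratio=0.5, decay=0.999):
        self.agent_buffer = []                 # filled during self-exploration
        self.expert_buffer = list(expert_data) # fixed demonstration set
        self.expert_ratio = initial_expert_ratio
        self.decay = decay

    def add(self, transition):
        """Store a transition collected by the agent."""
        self.agent_buffer.append(transition)

    def sample(self, batch_size):
        """Draw a mini-batch mixing expert and agent transitions."""
        n_expert = min(round(batch_size * self.expert_ratio),
                       len(self.expert_buffer))
        n_agent = min(batch_size - n_expert, len(self.agent_buffer))
        batch = (random.sample(self.expert_buffer, n_expert)
                 + random.sample(self.agent_buffer, n_agent))
        self.expert_ratio *= self.decay  # anneal toward self-exploration
        return batch
```

In a full SAC pipeline, batches drawn this way would feed both the critic update and a policy update whose loss adds an imitation term on the expert transitions alongside the usual entropy-regularized objective.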
