Abstract

In this letter, we present a novel navigation system for unmanned ground vehicles (UGVs) that performs local path planning with deep reinforcement learning. The system decouples perception from control and exploits multi-modal perception for reliable online interaction with the UGV's surroundings, enabling direct policy learning that generates flexible actions to avoid collisions with obstacles during navigation. By replacing raw RGB images with their semantic segmentation maps as input and applying a multi-modal fusion scheme, our system, trained only in simulation, can handle real-world scenes containing dynamic obstacles such as vehicles and pedestrians. We also introduce a modal separation learning scheme to accelerate training and further boost performance. Extensive experiments demonstrate that our method closes the gap between simulated and real environments and outperforms state-of-the-art approaches. Please refer to https://vsislab.github.io/mmpbnv1/ for a supplementary video demonstrating UGV navigation in both simulated and real-world environments.
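To make the fusion-based policy architecture described above concrete, the following is a minimal sketch, not the authors' implementation: it encodes a semantic segmentation map and a second modality (depth is assumed here for illustration) with separate small CNNs, concatenates the features, and maps them to a bounded continuous action. All class counts, layer sizes, and the depth modality are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a multi-modal fusion policy for UGV navigation.
# Modality choices (segmentation + depth), sizes, and the 2-D action space
# (steer, throttle) are assumptions for illustration only.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Small CNN that encodes one image-like modality into a feature vector."""

    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class FusionPolicy(nn.Module):
    """Fuses per-modality features and outputs a 2-D action (steer, throttle)."""

    def __init__(self, num_classes: int = 8, feat_dim: int = 128):
        super().__init__()
        # One encoder per modality: one-hot semantic map and one-channel depth.
        self.seg_encoder = ModalityEncoder(num_classes, feat_dim)
        self.depth_encoder = ModalityEncoder(1, feat_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh(),  # actions bounded to [-1, 1]
        )

    def forward(self, seg: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.seg_encoder(seg), self.depth_encoder(depth)], dim=-1)
        return self.head(fused)


if __name__ == "__main__":
    policy = FusionPolicy(num_classes=8)
    seg = torch.randn(1, 8, 96, 96)    # semantic map (one-hot or logits)
    depth = torch.randn(1, 1, 96, 96)  # depth image
    print(policy(seg, depth).shape)    # torch.Size([1, 2])
```

Keeping each modality in its own encoder before fusion is what lets the semantic branch be swapped between simulated and real segmentation maps without retraining the control head, which is the intuition behind the sim-to-real transfer the abstract claims.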
