Abstract

Collision-free navigation without a pre-built map is a promising yet challenging problem. To make full use of the information available in the environment, we propose a multi-input mapless navigation framework that combines vision-based and ranging-based approaches. Using the position and orientation of the robot relative to the target as a reference, it incorporates RGB-D images and distance measurements as state inputs to a deep reinforcement learning model. Within the framework, a Dual-VAE is designed to encode RGB and depth images simultaneously, effectively mining the intrinsic information of RGB-D observations. Moreover, we design a reward function that encourages the robot to follow smooth trajectories and to keep extra clearance from obstacles, leaving space for other tasks. Experiments show that the target-driven multi-input mapless navigation framework achieves satisfactory performance in simulation environments. Our source code and simulation environments are released at https://github.com/YosephYu/MultiInput-Mapless-Navigation.
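To illustrate the dual-branch encoding idea, the sketch below shows one plausible way to structure a Dual-VAE encoder in PyTorch: separate convolutional branches for the RGB and depth streams, each producing its own Gaussian latent, with the concatenated latent serving as the visual part of the RL state. The layer sizes, latent dimension, and class name `DualVAE` are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch of a dual-branch VAE encoder for RGB-D observations.
# Layer sizes and latent dimension are assumptions, not the paper's values.
import torch
import torch.nn as nn


class DualVAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Separate convolutional encoders for the RGB (3-channel) and depth (1-channel) streams.
        self.rgb_enc = self._make_encoder(in_ch=3)
        self.depth_enc = self._make_encoder(in_ch=1)
        # Each branch predicts its own Gaussian latent parameters.
        self.rgb_mu = nn.Linear(256, latent_dim)
        self.rgb_logvar = nn.Linear(256, latent_dim)
        self.depth_mu = nn.Linear(256, latent_dim)
        self.depth_logvar = nn.Linear(256, latent_dim)

    @staticmethod
    def _make_encoder(in_ch: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 256), nn.ReLU(),
        )

    @staticmethod
    def _reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        # Standard VAE reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        h_rgb = self.rgb_enc(rgb)
        h_depth = self.depth_enc(depth)
        z_rgb = self._reparameterize(self.rgb_mu(h_rgb), self.rgb_logvar(h_rgb))
        z_depth = self._reparameterize(self.depth_mu(h_depth), self.depth_logvar(h_depth))
        # The concatenated latent would be fed to the RL policy together with
        # range measurements and the robot's relative target pose.
        return torch.cat([z_rgb, z_depth], dim=-1)


# Example usage on a batch of RGB and depth frames (shapes are illustrative).
if __name__ == "__main__":
    model = DualVAE(latent_dim=32)
    z = model(torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64))
    print(z.shape)  # torch.Size([4, 64])
```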
