Abstract

Autonomous mapless navigation is challenging because a vehicle must find its way to the destination without a global map. In this article, we address the problem of reinforcement learning (RL)-based autonomous mapless navigation of a mobile vehicle equipped with depth sensors, where only local depth images and relative target positions are available. This study aims to determine the velocity of the mobile vehicle so that it avoids collisions and reaches the final destination as soon as possible. To accomplish this objective, a new approach to autonomous mapless navigation, called hindsight-intermediate-target-based RL (HIT-RL), is proposed that effectively takes detour routes to the destination based solely on recent local observations. The proposed HIT-RL generates virtual intermediate targets that guide the mobile vehicle to the final destination. The velocity of the mobile vehicle, taken as the RL action, is determined to aim for the intermediate targets, and the policy is trained with a neural network that is rewarded for reaching the original target. Navigation tasks realized with an aerial vehicle, or drone, show that the proposed HIT-RL allows the mobile vehicle to effectively escape from dead ends without a map and, hence, reach the final destination within a reasonable time. The navigation performance is also illustrated with an actual mobile vehicle system to show that the proposed HIT-RL exhibits good performance in practical environments.
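
The abstract does not specify how the intermediate targets are selected or how the reward is assigned, so the following Python sketch only illustrates the general idea under assumed details: a hypothetical `choose_intermediate_target` that picks a point in the currently observed free space closest to the final goal, and a hypothetical `hindsight_reward` that gives a sparse reward for reaching whichever target is active. The sensor model, selection rule, and reward shape are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def choose_intermediate_target(depth_scan, angles, robot_pos, heading, goal_pos,
                               margin=0.3):
    """Hypothetical sketch: pick a virtual intermediate target inside the
    currently observed free space that makes the most progress toward the
    final goal. The paper's actual selection rule is not given in the abstract."""
    best, best_cost = None, np.inf
    for r, a in zip(depth_scan, angles):
        # Candidate point just short of the obstacle along each depth-sensor ray.
        d = max(r - margin, 0.0)
        cand = robot_pos + d * np.array([np.cos(heading + a), np.sin(heading + a)])
        cost = np.linalg.norm(goal_pos - cand)  # remaining straight-line distance to goal
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

def hindsight_reward(next_pos, target_pos, reach_dist=0.5):
    """Assumed sparse reward: 1 when the active (intermediate or final) target is reached."""
    return 1.0 if np.linalg.norm(next_pos - target_pos) < reach_dist else 0.0

# Toy usage with a 5-beam depth scan (purely illustrative values).
scan = np.array([2.0, 3.5, 0.8, 4.0, 1.2])
angs = np.linspace(-np.pi / 3, np.pi / 3, 5)
sub_goal = choose_intermediate_target(scan, angs, np.zeros(2), 0.0, np.array([6.0, 2.0]))
r = hindsight_reward(np.array([0.5, 0.1]), sub_goal)
```

In HIT-RL as described in the abstract, such intermediate targets would steer the velocity commands produced by the learned policy, while training still rewards reaching the original destination.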
