Abstract

Visual navigation is required for many robotics applications, ranging from mobile manipulation to automated driving. One of the most commonly used visual navigation technologies is path planning, which finds a valid sequence of configurations to move from a starting point to a destination point. Deep reinforcement learning (DRL) provides a mapless, trainable approach that integrates path planning, localization, and image processing in a single module, so the approach can be optimized for a specific environment. However, DRL-based navigation has mostly been validated in simple, relatively small simulation environments. We therefore propose a new visual navigation architecture based on deep reinforcement learning. We designed a realistic simulation framework that resembles the state of a room containing several models of household goods. An agent in the simulator learns path planning through deep reinforcement learning, supported by an A2C network, an LSTM, and auxiliary tasks. We evaluated the method in the simulation framework over 10 experiments, each carried out in 1000 randomly generated environments; training takes about 18 hours on a single GPU. In this larger simulation environment, our method achieves a success rate of 99.81% in finding the destination specified by a given image. These results show that the proposed method can be applied to broader environments, and that this approach is a step toward human-robot collaboration.
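
Although the abstract does not include code, the architecture it names (an A2C actor-critic with an LSTM core and auxiliary tasks) can be sketched concretely. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the layer sizes, the 84x84 RGB input, and the choice of reward prediction as the auxiliary task are all assumptions made for illustration.

```python
# A minimal, hypothetical sketch of the kind of A2C + LSTM network the
# abstract describes; NOT the authors' implementation. Layer sizes, the
# 84x84 RGB input, and the reward-prediction auxiliary head are assumptions.
import torch
import torch.nn as nn


class A2CLstmAgent(nn.Module):
    def __init__(self, num_actions: int, hidden_size: int = 256):
        super().__init__()
        # Convolutional encoder for the agent's first-person camera image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, hidden_size), nn.ReLU(),
        )
        # The LSTM gives the policy memory across steps of a navigation episode.
        self.lstm = nn.LSTMCell(hidden_size, hidden_size)
        # Actor head (policy over discrete actions) and critic head (state value).
        self.policy = nn.Linear(hidden_size, num_actions)
        self.value = nn.Linear(hidden_size, 1)
        # Auxiliary head, e.g. predicting the immediate reward (assumed task).
        self.aux = nn.Linear(hidden_size, 1)

    def forward(self, obs, state):
        feat = self.encoder(obs)        # (batch, hidden_size)
        h, c = self.lstm(feat, state)   # recurrent state update
        return self.policy(h), self.value(h), self.aux(h), (h, c)


# Example of a single rollout step, with hypothetical shapes:
agent = A2CLstmAgent(num_actions=4)
obs = torch.zeros(1, 3, 84, 84)                     # one 84x84 RGB frame
state = (torch.zeros(1, 256), torch.zeros(1, 256))  # initial LSTM state
logits, value, aux, state = agent(obs, state)
action = torch.distributions.Categorical(logits=logits).sample()
```

In A2C the policy and value heads share the recurrent trunk, and the auxiliary loss (whatever task the authors actually use) is added to the actor-critic loss during training to provide extra learning signal in sparse-reward navigation.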
