Abstract

This paper is concerned with the autonomous navigation of a mobile robot from its current position to a desired position using only the current visual observation, without a map of the environment built beforehand. Under the framework of deep reinforcement learning, a Deep Q Network (DQN) is used to map the raw camera image directly to the optimal action of the mobile robot. Reinforcement learning requires a large number of training samples, which makes it difficult to apply directly in a real robot navigation scenario. To solve this problem, the DQN is first trained in the Gazebo simulation environment, and the well-trained DQN is then applied to the real mobile robot navigation scenario. Both simulation and real-world experiments have been conducted to validate the proposed approach. The experimental results of autonomous navigation in the Gazebo simulation environment show that the trained DQN can approximate the state-action value function of the mobile robot and accurately map the current raw image to the optimal action. The experimental results in real indoor scenes demonstrate that the DQN trained in the simulated environment also works in the real indoor environment: the mobile robot can avoid obstacles and reach the target location even in the presence of dynamic obstacles and environmental interference. The proposed method is therefore an effective and environmentally adaptable autonomous navigation approach for mobile robots in unknown environments.
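The abstract states that the DQN approximates the state-action value function. The core of any DQN-style method is the Bellman target used to train that approximation, together with an epsilon-greedy action-selection rule. The sketch below illustrates only these two generic building blocks with NumPy; the function names, the example Q-values, and the hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def q_learning_target(reward, next_q_values, gamma=0.99, done=False):
    # Bellman target y = r + gamma * max_a' Q(s', a');
    # the future-value term is dropped at terminal states.
    if done:
        return reward
    return reward + gamma * float(np.max(next_q_values))

def epsilon_greedy(q_values, epsilon, rng):
    # Explore with probability epsilon, otherwise act greedily
    # with respect to the current Q-value estimates.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q_next = np.array([0.2, 1.0, -0.5])  # hypothetical Q(s', .) from the network
y = q_learning_target(reward=1.0, next_q_values=q_next, gamma=0.9)
print(y)  # 1.0 + 0.9 * 1.0 = 1.9
```

In the paper's setting, `q_values` would come from a convolutional network evaluated on the current camera image, and the targets `y` would supervise that network via a regression loss; those details are specific to the authors' implementation and are not reproduced here.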
