Abstract

This work proposes the use of deep reinforcement learning with continuous action control for mobile robot navigation and obstacle avoidance in previously unknown, map-free environments, extending the capabilities of mobile robots beyond conventional map-based navigation. Deep reinforcement learning enables the robot to learn to make decisions and interact with the environment, observed through sensor data, so that it can safely navigate to its destination. The robot is equipped with a two-dimensional laser scanner, ultrasonic sensors, and an odometry sensor. Deep Deterministic Policy Gradient (DDPG), which operates in continuous action spaces, was chosen as the deep reinforcement learning model. The robot is trained and tested in the Gazebo simulator with the Robot Operating System. After training, the robot is evaluated on a waypoint navigation mission in four unknown areas. The results indicate that the mobile robot is adaptable and can travel to the specified waypoints and complete the mission in unknown environments, without a pre-drawn route or an obstacle map, with a minimum success rate of 69.7 percent.
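To illustrate the continuous-control idea the abstract describes, the following is a minimal sketch (not the authors' implementation) of a DDPG-style deterministic actor that maps a sensor observation vector to bounded continuous velocity commands, with Ornstein-Uhlenbeck exploration noise as is common in DDPG. The observation and action dimensions, network sizes, and noise parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed dimensions: e.g. laser-scan readings plus goal distance/heading
# mapped to (linear velocity, angular velocity). These are illustrative only.
OBS_DIM, ACT_DIM = 26, 2

rng = np.random.default_rng(0)

class DeterministicActor:
    """One-hidden-layer deterministic policy: a = tanh(W2 @ relu(W1 @ s))."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        self.w1 = rng.normal(0.0, 0.1, (hidden, obs_dim))
        self.w2 = rng.normal(0.0, 0.1, (act_dim, hidden))

    def act(self, obs):
        h = np.maximum(0.0, self.w1 @ obs)  # ReLU hidden layer
        return np.tanh(self.w2 @ h)         # actions bounded in [-1, 1]

class OUNoise:
    """Ornstein-Uhlenbeck process, often used for DDPG exploration."""
    def __init__(self, act_dim, theta=0.15, sigma=0.2):
        self.theta, self.sigma = theta, sigma
        self.state = np.zeros(act_dim)

    def sample(self):
        self.state += -self.theta * self.state + self.sigma * rng.normal(size=self.state.shape)
        return self.state

actor = DeterministicActor(OBS_DIM, ACT_DIM)
noise = OUNoise(ACT_DIM)

obs = rng.uniform(0.0, 1.0, OBS_DIM)  # stand-in for normalized sensor readings
action = np.clip(actor.act(obs) + noise.sample(), -1.0, 1.0)
print(action.shape)
```

In training, the noisy action would be sent to the simulated robot and the resulting transition stored in a replay buffer for the actor and critic updates; the sketch above covers only the action-selection step.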
