Abstract

This paper addresses the mapless navigation control of wheeled mobile robots based on deep learning. The traditional navigation control framework relies on a global map of the environment, and its navigation performance depends on the quality of that map. In this paper, we propose a mapless Light Detection and Ranging (LiDAR) navigation control method for wheeled mobile robots based on deep imitation learning. The proposed method is a data-driven control method that directly uses LiDAR measurements and the relative target position for mobile robot navigation control. A deep convolutional neural network (CNN) model is proposed to predict the motion control commands of the mobile robot without requiring a global map, enabling navigation control in unknown environments. To collect the training dataset, we manually drove the mobile robot to avoid obstacles and recorded the raw LiDAR data, the relative target position, and the corresponding motion control commands. We then applied data augmentation to the recorded samples to increase the number of training samples in the dataset. In the network design, the proposed CNN model consists of a LiDAR CNN module that extracts LiDAR features and a motion prediction module that predicts the motion behavior of the robot. In the training phase, the proposed CNN model learns the mapping between the input sensor data and the desired motion behavior through end-to-end imitation learning. Experimental results show that the proposed mapless LiDAR navigation control method safely navigates the mobile robot in four unseen environments with an average success rate of 75%. The proposed system is therefore effective for robot navigation control in unknown environments without a global map.
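To make the two-module design concrete, the sketch below shows one plausible PyTorch realization: a 1-D CNN over the raw LiDAR scan (the LiDAR CNN module) whose features are fused with the relative target position in a small fully connected head (the motion prediction module), trained by regressing the recorded human control commands. All layer sizes, the scan length, and the two-dimensional command output (linear and angular velocity) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MaplessNavNet(nn.Module):
    """Illustrative two-module model: LiDAR CNN module + motion prediction module.
    Layer sizes are assumptions, not the paper's exact architecture."""

    def __init__(self, scan_len=360, n_cmd=2):
        super().__init__()
        # LiDAR CNN module: 1-D convolutions over the raw range scan.
        self.lidar_cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened feature size
            feat = self.lidar_cnn(torch.zeros(1, 1, scan_len)).shape[1]
        # Motion prediction module: fuse LiDAR features with the relative
        # target position (distance, heading) and predict motion commands.
        self.motion_head = nn.Sequential(
            nn.Linear(feat + 2, 128), nn.ReLU(),
            nn.Linear(128, n_cmd),
        )

    def forward(self, scan, target):
        # scan: (B, 1, scan_len) LiDAR ranges; target: (B, 2) relative goal.
        return self.motion_head(torch.cat([self.lidar_cnn(scan), target], dim=1))

# Behavior cloning (end-to-end imitation learning): regress the commands
# that the human operator issued for each recorded (scan, target) sample.
model = MaplessNavNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
scan = torch.rand(8, 1, 360)        # dummy batch standing in for recorded data
target = torch.rand(8, 2)
expert_cmd = torch.rand(8, 2)
loss = loss_fn(model(scan, target), expert_cmd)
loss.backward()
optimizer.step()
```

The mean-squared-error objective shown here is one common choice for cloning continuous control commands; the paper's actual loss and optimizer settings may differ.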

Highlights

  • Navigation control is one of the core functions of autonomous mobile robots; it enables a robot to complete motion control and obstacle avoidance tasks in its working environment

  • We propose a mapless Light Detection and Ranging (LiDAR) navigation control system based on an end-to-end convolutional neural network (CNN) model, which is trained through imitation learning

  • Experimental results show that the proposed LiDAR navigation control system based on end-to-end imitation learning achieves good navigation control performance on a wheeled mobile robot


Summary

INTRODUCTION

Navigation control is one of the core functions of autonomous mobile robots; it enables a robot to complete motion control and obstacle avoidance tasks in its working environment. Qiang et al. [10] proposed a model-free mapless navigation method for mobile robots based on reinforcement learning. They designed an end-to-end navigation model that takes LiDAR sensor data as input and outputs one of five possible motion actions. Tai et al. [13] proposed a learning-based mapless motion planner that takes the laser scan data and the relative target position as input and outputs continuous navigation commands. They applied an asynchronous deep reinforcement learning method to train the end-to-end mapless motion planner in a virtual environment. Pfeiffer et al. [14] proposed a target-driven mapless navigation policy based on a combination of imitation and reinforcement learning. They designed a neural network model that takes the sensor measurements and the relative target position as input and outputs the required control commands.
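In closed-loop use, all of the planners above follow the same sense-predict-actuate cycle. The fragment below sketches that loop for a continuous-command policy such as the one in [13]; the robot interface (get_scan, get_relative_goal, send_cmd) is a hypothetical placeholder for whatever driver the platform provides.

```python
import torch

def navigate_step(model, robot):
    """One control cycle of an end-to-end mapless policy: sense, predict, actuate.
    The robot object and its methods are hypothetical placeholders."""
    scan = torch.as_tensor(robot.get_scan(), dtype=torch.float32).view(1, 1, -1)
    goal = torch.as_tensor(robot.get_relative_goal(), dtype=torch.float32).view(1, 2)
    with torch.no_grad():
        v, w = model(scan, goal)[0]  # continuous commands, e.g. (linear, angular)
    robot.send_cmd(linear=v.item(), angular=w.item())
```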

SYSTEM ARCHITECTURE
PREPARATION OF TRAINING DATASET
EXPERIMENTAL RESULTS
CONCLUSION AND FUTURE WORK