Abstract

Steering a wheeled mobile robot through a variety of environments is a complex task. To achieve this, many researchers have tried to map the front-facing camera image stream to the corresponding steering angles using convolutional neural network (CNN) models. However, most existing methods suffer from high data-acquisition costs and long training cycles. To address these issues, this paper proposes an end-to-end deep neural network model that fully considers the temporal relationships in the data by incorporating long short-term memory (LSTM) units on top of the CNN model. In addition, to obtain enough data to train and test the model, we build a simulation system capable of creating realistic environments with various weather and road conditions, as well as static and dynamic obstacles for the robot to avoid. First, we use the system to capture raw image sequences in different environments as a training set; we then test the trained model in the system to realize an autonomous mobile robot that can adapt to various environments. The experimental results demonstrate that the proposed model not only effectively extracts the road-vision features most relevant to navigation, but also learns the temporal dependencies between the motion states and image features contained in a sequence.
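
For concreteness, the following is a minimal sketch of the kind of CNN-LSTM architecture described above, written here in PyTorch; the layer sizes, sequence length, and image resolution are illustrative assumptions and not the paper's exact configuration.

```python
# Minimal CNN-LSTM steering sketch (illustrative only; layer sizes,
# sequence length, and image resolution are assumptions, not the
# paper's exact configuration).
import torch
import torch.nn as nn

class CNNLSTMSteering(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        # Per-frame visual feature extractor (CNN)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                      # -> 48 * 4 * 4 = 768 features
        )
        # Temporal model over the per-frame features
        self.lstm = nn.LSTM(input_size=768, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # steering-angle regression

    def forward(self, x):
        # x: (batch, seq_len, 3, H, W) image sequence from the front camera
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])           # angle for the latest frame

# Example: a batch of 2 sequences of 5 RGB frames at 66x200
angles = CNNLSTMSteering()(torch.randn(2, 5, 3, 66, 200))
```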

Highlights

  • Wheeled mobile robots have a wide range of applications in fields such as smart homes, autonomous driving, evacuation guidance [1], [2], and transportation, owing to their efficient and intelligent autonomous navigation

  • Robots use local path planning methods such as the ant colony algorithm [3] and the artificial potential field [4] to avoid obstacles in real time based on the information acquired by their sensors, but these approaches tend to become trapped in local optima and cause path oscillations

  • Compared with a convolutional neural network (CNN) model alone, the proposed model fully considers the temporal dependencies among samples, and training can be terminated in fewer epochs by using early stopping (see the sketch after this list)
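
Below is a minimal sketch of the early-stopping loop referred to in the last highlight, assuming a PyTorch model, an Adam optimizer, a mean-squared-error steering loss, and a patience threshold; these details are illustrative assumptions rather than the paper's exact training protocol.

```python
# Early-stopping training loop sketch (optimizer, loss, patience, and
# data loaders are illustrative assumptions, not the paper's protocol).
import copy
import torch

def train_with_early_stopping(model, train_loader, val_loader,
                              max_epochs=100, patience=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()               # steering-angle regression
    best_val, best_state, bad_epochs = float("inf"), None, 0

    for epoch in range(max_epochs):
        model.train()
        for images, angles in train_loader:    # angles shaped (batch, 1)
            opt.zero_grad()
            loss_fn(model(images), angles).backward()
            opt.step()

        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)

        if val < best_val:                      # validation loss improved
            best_val, bad_epochs = val, 0
            best_state = copy.deepcopy(model.state_dict())
        else:                                   # no improvement this epoch
            bad_epochs += 1
            if bad_epochs >= patience:          # stop training early
                break

    model.load_state_dict(best_state)
    return model
```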

Introduction

Wheeled mobile robots (hereinafter referred to as robots) have a wide range of applications in fields such as smart homes, autonomous driving, evacuation guidance [1], [2], and transportation, owing to their efficient and intelligent autonomous navigation. This capability rests on their embedded pathfinding algorithms, which enable a robot to perform turning, obstacle avoidance, and speed control safely and quickly. These operations all demand considerable computing resources, so ensuring the real-time performance and robustness of the pathfinding algorithm becomes an important issue. In response, researchers at home and abroad have carried out a great deal of work and made considerable progress. Hybrid path planning methods [5], which combine global map information with local optimization strategies, significantly reduce the probability of a robot falling into a local optimum and improve the efficiency of path planning, but they struggle to fully perceive complex and changing environments and do not take full advantage of visual features as an important source of pathfinding information.
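
The tendency of potential-field-style local planners to stall in a local optimum, mentioned above and in the highlights, can be made concrete with a short sketch. The following is an illustrative, textbook-style artificial-potential-field step (not the paper's method; the gains, influence radius, and scenario are arbitrary assumptions): the robot descends a combined attractive and repulsive potential and can stall or oscillate where the two gradients cancel.

```python
# Illustrative artificial-potential-field step (not the paper's method):
# the robot follows the sum of an attractive force toward the goal and
# repulsive forces away from nearby obstacles, and can get stuck where
# these forces cancel (a local minimum).
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0,
             influence=2.0, step=0.05):
    # Attractive force pulls toward the goal.
    force = k_att * (goal - pos)
    # Repulsive force pushes away from obstacles inside their influence radius.
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

# Goal directly behind an obstacle: attraction and repulsion cancel along
# the line to the goal, so the robot oscillates near a local minimum
# instead of reaching the goal.
pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacles = [np.array([2.5, 0.0])]
for _ in range(200):
    pos = apf_step(pos, goal, obstacles)
```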
