Abstract

Robot navigation is a fundamental problem in robotics, and a wide range of approaches has been developed to address it. Alongside the success of these classical approaches, learning-based methods are receiving growing interest in the research community: they have proven efficient at solving navigation tasks and offer considerable promise for building intelligent navigation systems. This paper presents a goal-directed robot navigation system that integrates global planning based on goal-directed end-to-end learning with local planning based on reinforcement learning (RL). The proposed system navigates the robot to desired goal positions while remaining adaptive to changes in the environment. The global planner is trained by goal-directed end-to-end learning to imitate an expert's navigation between different positions, incorporating both goal representations and local observations to generate actions. Because it is trained in a supervised fashion, however, it copes poorly with changes in the environment. To address this, a local planner based on deep reinforcement learning (DRL) is designed; it is first trained in a simulator and then transferred to the real world. The local planner complements the global planner by handling situations not encountered during the global planner's training, and it generalizes across different situations. Experimental results on a robot platform demonstrate the effectiveness of the proposed navigation system.
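The division of labor the abstract describes can be sketched as a simple switching rule: follow the goal-directed global policy by default, and hand control to the DRL local planner when an obstacle is detected nearby. The sketch below is purely illustrative; all names (`Observation`, `GlobalPlanner`, `LocalPlanner`, `select_action`, the `safe_dist` threshold) are hypothetical stand-ins, not the paper's implementation.

```python
import random
from dataclasses import dataclass


@dataclass
class Observation:
    depth_min: float  # closest depth reading in meters (illustrative sensor summary)


class GlobalPlanner:
    """Stand-in for the goal-directed end-to-end policy (supervised imitation)."""

    def act(self, obs: Observation, goal) -> str:
        return "forward"  # placeholder for the imitation-learned action


class LocalPlanner:
    """Stand-in for the DRL obstacle-avoidance policy."""

    def act(self, obs: Observation) -> str:
        return random.choice(["left", "right"])  # placeholder for the learned avoidance action


def select_action(obs, goal, global_p, local_p, safe_dist=0.5):
    # Switch to the local planner when an obstacle intrudes within safe_dist;
    # otherwise follow the goal-directed global policy.
    if obs.depth_min < safe_dist:
        return local_p.act(obs)
    return global_p.act(obs, goal)
```

The single threshold test is a simplification; the paper dedicates a section ("Switching Between Two Different Strategies") to its actual switching criterion.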

Highlights

  • Robot navigation techniques are ubiquitous in domestic tasks since the ability to navigate efficiently within an environment is the prerequisite to complete many motion-related tasks.

  • Robot navigation tasks can be summarized as endowing a robot with the ability to move from its current position to a designated goal location based on the sensory inputs from its onboard sensors. Conventional approaches [1,2,3,4] usually solve this problem by dividing it into several phases, including map building, localization, obstacle detection, and path planning.

  • We propose a navigation system consisting of a global planner and a local planner, both based on learning approaches.


Summary

Introduction

Robot navigation techniques are ubiquitous in domestic tasks, since the ability to navigate efficiently within an environment is a prerequisite for completing many motion-related tasks. The action policy trained by goal-directed end-to-end learning depends on the structure of a particular environment and enables the robot to navigate to the goal on the global scale by making correct turns at certain intersections based on the goal position. Objects placed randomly on the floor temporarily change the local structure, but the global structure of the environment stays unchanged. In such scenarios, the action policy trained by end-to-end learning probably will not work well in those particular areas.
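The local planner that handles these temporarily changed areas is a D3QN, which (per the outline's "Double Q-Learning" and "Dueling Q-Learning" sections) combines two standard ingredients of deep Q-learning. Both can be written down compactly; the plain-Python sketch below is illustrative, with hypothetical function names, and stands in for what is actually a neural-network head and training target.

```python
def dueling_q_values(value, advantages):
    """Dueling aggregation: combine a state value V(s) with per-action
    advantages A(s, a) as Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]


def double_q_target(reward, gamma, q_online_next, q_target_next, done):
    """Double Q-learning target: the online network selects the argmax
    action, the target network evaluates it, reducing overestimation bias."""
    if done:
        return reward
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]
```

Subtracting the mean advantage makes the V/A decomposition identifiable; in a real D3QN both functions operate on network outputs rather than Python lists.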

Related Work
Robot Navigation Based on Supervised Learning
Robot Navigation Based on Reinforcement Learning
Combining Global and Local Planners for Robot Navigation
Goal-Directed End-to-End Learning
Implementation of the Goal-Directed End-to-End Learning
Reinforcement Learning for Local Object Avoidance
Deep Q-Learning
Double Q-Learning
Dueling Q-Learning
Implementation of D3QN
Switching Between Two Different Strategies
Experimental Setup
Data Preparation and Training
Results of the Global Planner
Local Planner for Object Avoidance
Training in Simulation
Results in the Simulation
Results in the Real World
Combining the Global and Local Planners
Conclusions

