Abstract
In recent years, deep learning and reinforcement learning methods have significantly improved mobile robots in fields such as perception, navigation, and planning. However, gaps remain in applying these methods to real robots, owing to the low computational efficiency of recent neural network architectures and their poor adaptability to the realities of robotic experiments. In this article, we consider an important task in mobile robotics: navigation to an object using an RGB-D camera. We develop a new neural network framework for robot control that is fast and robust to possible noise in sensors and actuators. We propose an original integration of semantic segmentation, mapping, localization, and reinforcement learning methods to improve the effectiveness of exploring the environment, finding the desired object, and quickly navigating to it. We created a new HISNav dataset based on the Habitat virtual environment, which allowed us to pre-train the model in simulation and then deploy it on a real robot. Our architecture is adapted to real-time operation and fully reflects modern trends in this area.
Highlights
Real-time navigation, path planning, localization, and obstacle avoidance are the major challenges of mobile robots [1]
Using the mean Average Precision (mAP) of instance segmentation over all 40 object classes, we selected the best model of each type
Our framework includes modern neural network methods for instance segmentation, localization, and mapping, which allow the robot to solve its subtasks in real time
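The highlights mention selecting the best model of each type by its mean Average Precision (mAP) over all 40 object classes. A minimal sketch of that selection criterion is shown below; the per-class AP values and model names are illustrative assumptions, not numbers from the paper.

```python
# Hypothetical sketch: picking the best instance segmentation model by mAP,
# i.e. the mean of per-class Average Precision scores. The candidate models
# and AP values below are made up for illustration.

def mean_average_precision(per_class_ap):
    """Average per-class AP values into a single mAP score."""
    return sum(per_class_ap.values()) / len(per_class_ap)

# Illustrative per-class AP for two candidate models (a real evaluation
# would cover all 40 classes).
candidates = {
    "model_a": {"chair": 0.61, "table": 0.54, "door": 0.48},
    "model_b": {"chair": 0.58, "table": 0.59, "door": 0.55},
}

best = max(candidates, key=lambda name: mean_average_precision(candidates[name]))
```

Here `model_b` wins because its AP is more evenly high across classes, even though `model_a` is better on one class; mAP rewards consistent performance over all object categories.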
Summary
Real-time navigation, path planning, localization, and obstacle avoidance are the major challenges of mobile robots [1]. Such problems are often solved by methods that do not use machine learning [2]–[4]. The robot should, however, be able to learn while performing actions in its environment. It can then be considered an intelligent agent [5] whose behavior is synthesized by a control architecture that includes learnable subsystems
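The control architecture described above, an agent whose behavior is produced by learnable subsystems for perception, localization/mapping, and decision making, can be sketched as a simple modular loop. All class and method names here are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of a modular navigation agent: instance segmentation
# for perception, a localization/mapping subsystem, and an RL policy that
# maps the resulting state to an action. Each subsystem is a learnable
# component; here they are plain callables for illustration.

class NavigationAgent:
    def __init__(self, segmenter, localizer, policy):
        self.segmenter = segmenter  # perception: RGB -> object masks
        self.localizer = localizer  # RGB-D -> (pose estimate, local map)
        self.policy = policy        # RL policy: state -> action

    def step(self, rgb, depth):
        """One control cycle: perceive, localize, then act."""
        masks = self.segmenter(rgb)
        pose, local_map = self.localizer(rgb, depth)
        return self.policy(masks, pose, local_map)


# Usage with stub subsystems standing in for trained networks.
agent = NavigationAgent(
    segmenter=lambda rgb: ["chair_mask"],
    localizer=lambda rgb, depth: ((0.0, 0.0, 0.0), "local_map"),
    policy=lambda masks, pose, local_map: "move_forward",
)
action = agent.step(rgb=None, depth=None)
```

The design point this sketch captures is modularity: each subsystem can be pre-trained in simulation (as the paper does with the Habitat-based HISNav dataset) and swapped independently when transferring to a real robot.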