Abstract

Robot simultaneous localization and mapping (SLAM) is an important and widely used technology in the robotics field. However, the environmental map constructed by traditional visual SLAM methods contains little semantic information, which cannot satisfy the needs of complex applications. A semantic map can deal with this problem efficiently and has therefore become a research hotspot. This paper proposes an improved deep residual network- (ResNet-) based semantic SLAM method for monocular vision robots. In the proposed approach, an improved image matching algorithm based on feature points is presented to enhance the anti-interference ability of the algorithm. Then, a robust feature point extraction method is adopted in the front-end module of the SLAM system, which can effectively reduce the probability of camera tracking loss. In addition, an improved key frame insertion method is introduced into the visual SLAM system to enhance the stability of the system while the robot is turning and moving. Furthermore, an improved ResNet model is proposed to extract the semantic information of the environment and complete the construction of the semantic map. Finally, various experiments are conducted, and the results show that the proposed method is effective.
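The paper's exact matching algorithm is not reproduced here; as a rough illustration of the kind of feature-point matching step such a SLAM front end builds on, the sketch below uses ORB features with a Lowe ratio test in OpenCV. The image paths, feature count, and ratio threshold are placeholder assumptions, not values taken from the paper.

```python
# Generic feature-point matching sketch (not the paper's improved algorithm).
# Assumes OpenCV is installed; "frame1.png" and "frame2.png" are placeholder images.
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)          # ORB keypoints + binary descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with k-NN, followed by Lowe's ratio test
# to discard ambiguous matches and reduce the effect of interference.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} matches kept after ratio test")
```

In a full front end, the surviving matches would feed pose estimation and keyframe selection; the ratio threshold here (0.75) is a commonly used default, not a tuned value.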

Highlights

  • With the rapid development of computer technology and sensor technology, the research and application of robots have reached a new height [1,2,3,4]

  • The simultaneous localization and mapping (SLAM) technology of robots has made some achievements, such as SLAM based on laser radar and sonar, SLAM based on robot vision, and so on [8, 9]. Visual SLAM is one of the most widely used SLAM technologies, which can be divided into monocular SLAM, binocular SLAM, and multivision SLAM [10, 11]

  • Although traditional SLAM maps can help robots locate themselves, they lack the understanding of the environment required for specific tasks, namely, semantic information. Robot semantic SLAM technology can deal with this problem, so more and more research has focused on robot semantic SLAM methods


Summary

Introduction

With the rapid development of computer technology and sensor technology, the research and application of robots have reached a new height [1,2,3,4]. When facing an unknown environment, mobile robots need to use their own sensor devices to sense the surroundings, build an environment map while moving, and determine their positions in that map; this is called the robot simultaneous localization and mapping (SLAM) problem [5,6,7]. Although traditional SLAM maps can help robots locate themselves, they lack the understanding of the environment required for specific tasks, namely, semantic information. Robot semantic SLAM technology can deal with this problem, so more and more research has focused on robot semantic SLAM methods. Civera et al. [12] combined a target recognition method with a monocular-vision SLAM method based on extended Kalman filtering and ran the two threads simultaneously to achieve semantic SLAM. However, the real-time performance and map readability requirements of semantic SLAM are difficult to meet at the same time.
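To illustrate the general idea of attaching semantic information to visual SLAM data (this is not the authors' improved ResNet nor Civera et al.'s pipeline), the sketch below labels a single keyframe image with an off-the-shelf, ImageNet-pretrained ResNet-18 from torchvision. The file name and model choice are assumptions made only for illustration.

```python
# Illustrative sketch: attaching a semantic label to a SLAM keyframe with an
# off-the-shelf ResNet (not the paper's improved model). Assumes torchvision
# >= 0.13 is installed; "keyframe.png" is a placeholder image path.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()              # resize, crop, normalize

img = Image.open("keyframe.png").convert("RGB")
batch = preprocess(img).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_class = probs.max(dim=1)
label = weights.meta["categories"][top_class.item()]
print(f"keyframe label: {label} ({top_prob.item():.2f})")
# A semantic map would store such labels alongside each keyframe's estimated pose.
```

In practice, a semantic SLAM system would run such inference on selected keyframes only and fuse the predictions into the map, rather than classifying every frame.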
