Abstract

In various dynamic scenes, movable objects such as pedestrians can challenge simultaneous localization and mapping (SLAM) algorithms: localization accuracy may be degraded, and moving objects may corrupt the constructed maps. Maps that contain semantic information about dynamic objects give humans and robots the ability to semantically understand the environment, and they are critical for various intelligent systems and location-based services. In this study, we developed a computationally efficient SLAM solution that accomplishes three tasks in real time: (1) localize without the accuracy loss caused by dynamic objects and generate a static map that contains no moving objects, (2) extract semantic information about dynamic objects through a computationally efficient approach, and (3) generate semantic maps that overlay semantic objects on the static maps. The proposed semantic SLAM solution was evaluated through four experiments on two data sets, verifying the tracking accuracy, the computational efficiency, and the quality of the generated static and semantic maps. The results show that the proposed solution is computationally efficient, reducing the time consumed for building maps by two-thirds; moreover, the relative localization accuracy is improved, with a translational error of only 0.028 m, and is not degraded by dynamic objects. Finally, the proposed solution generates static maps of dynamic scenes without moving objects, as well as semantic maps with high-precision semantic information of specific objects.
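To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how dynamic-object removal is commonly integrated into a feature-based SLAM front end: feature points that fall inside the segmentation masks of movable classes (e.g., "person") are discarded before pose estimation, so moving objects contribute neither to tracking nor to the map. The function name, the mask format, and the set of movable classes below are illustrative assumptions.

```python
import numpy as np
import cv2

# Assumed set of classes treated as dynamic; the paper's exact list may differ.
MOVABLE_CLASSES = {"person", "car", "bicycle"}

def filter_dynamic_keypoints(keypoints, masks, labels):
    """Discard keypoints that fall inside the mask of any movable object.

    keypoints: list of cv2.KeyPoint from the SLAM front end
    masks:     list of HxW uint8 binary masks, one per detected object
    labels:    list of class names, parallel to masks
    (This mask/label interface is an assumption for the example.)
    """
    # Merge all movable-object masks into one binary image.
    dynamic = None
    for mask, label in zip(masks, labels):
        if label in MOVABLE_CLASSES:
            dynamic = mask if dynamic is None else cv2.bitwise_or(dynamic, mask)
    if dynamic is None:
        return keypoints  # no dynamic objects detected in this frame

    static_kps = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if dynamic[v, u] == 0:  # keep only points on static structure
            static_kps.append(kp)
    return static_kps

# Usage: after detecting features (e.g., ORB) in a frame, drop those on dynamic
# objects before pose estimation, so tracking accuracy is not degraded:
# frame_kps = filter_dynamic_keypoints(orb_keypoints, seg_masks, seg_labels)
```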

Highlights

  • A dynamic scene contains various movable objects, such as pedestrians. It is important for intelligent systems and location-based services [1] to recognize such dynamic objects in real time and to construct scene maps with semantic information about them

  • This study addresses the impact of dynamic objects on simultaneous localization and mapping (SLAM); it detects and removes moving objects from the scene and generates a static map of a dynamic scene without moving objects

  • The proposed solution incorporates the semantic information of objects into the SLAM process and eventually generates semantic maps of a dynamic scene, which is significant for intelligent applications based on semantic information


Summary

Introduction

A dynamic scene contains various movable objects, such as pedestrians. It is important for intelligent systems and location-based services [1] to recognize such dynamic objects in real time and to construct scene maps with semantic information about them. A robot needs to understand what objects are in the scene and where those objects are located so that it can perform more intelligently. This task can be accomplished by combining a semantic segmentation network with mobile mapping. Existing semantic SLAM methods combine visual SLAM with semantic segmentation networks, but their high computational complexity is challenging for real-time systems. We propose a semantic SLAM solution that achieves high-precision single-object semantic segmentation and generates semantic maps of a dynamic scene in real time, as illustrated in the sketch below. The proposed solution incorporates the semantic information of objects into the SLAM process and eventually generates semantic maps of a dynamic scene, which is significant for intelligent applications based on semantic information.
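As a hedged illustration of the final mapping step (not the paper's algorithm), a semantic map can be represented as a labeled layer overlaid on a static occupancy grid: the static map answers "where can I go?", while the semantic layer answers "what is here?". The grid value convention and label encoding below are assumptions made for the example, and the projection of segmented objects into map cells is assumed to happen upstream.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 100, -1   # assumed occupancy-grid convention
LABEL_IDS = {"person": 1, "chair": 2}  # illustrative semantic label codes

def overlay_semantics(static_grid, object_cells):
    """Overlay semantic objects on a static occupancy grid.

    static_grid:  HxW int array holding FREE/OCCUPIED/UNKNOWN values
    object_cells: mapping from class name to a list of (row, col) grid cells,
                  obtained by projecting segmented objects into map coordinates.
    Returns an HxW semantic layer, leaving the static map itself unchanged.
    """
    semantic_layer = np.zeros_like(static_grid)
    for name, cells in object_cells.items():
        label = LABEL_IDS.get(name)
        if label is None:
            continue  # skip classes without an assigned label code
        for r, c in cells:
            semantic_layer[r, c] = label
    return semantic_layer

# Usage: the semantic map is the pair (static_grid, semantic_layer).
grid = np.full((4, 4), FREE)
layer = overlay_semantics(grid, {"person": [(1, 2)], "chair": [(3, 0)]})
```

Keeping the semantic labels in a separate layer, rather than writing them into the occupancy grid, preserves the static map for navigation while still letting location-based services query object semantics per cell.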

Overview of the Framework
Semantic Segmentation of Objects
Object Detection with Deep Learning
Contour Extraction of Moving Objects
Generating Static Maps Using LUT-Based SLAM
Tracking
Estimating Moving Direction of Images
Loop Closure
Global Bundle Adjustment
Objective
Mapping with LUT
Experiments
Tracking Accuracy
Estimating Moving Direction and Mapping with LUT
Constructing Static Maps of Dynamic Scenes
Constructing Semantic Maps
Findings
Conclusions
