Abstract

The simultaneous localization and mapping (SLAM) problem has been studied extensively in robotics; however, conventional mapping approaches assume a static environment. This static assumption holds only in small regions, and it limits the application of visual SLAM in dynamic environments. Recently proposed state-of-the-art SLAM solutions for dynamic environments rely on semantic segmentation methods such as Mask R-CNN and SegNet; however, these frameworks are built on a sparse mapping framework (ORB-SLAM). In addition, the segmentation step increases the computational cost, which makes these SLAM algorithms unsuitable for real-time mapping. As a result, there is no effective dense RGB-D SLAM method for real-world unstructured and dynamic environments. In this study, we propose a novel real-time dense SLAM method for dynamic environments, in which the 3D reconstruction error is used to identify static and dynamic classes modeled by a generalized Gaussian distribution. Our approach requires neither explicit object tracking nor an object classifier, which makes it robust to any type of moving object and suitable for real-time mapping. Our method eliminates repeated views and uses consistent data, which enhances the performance of volumetric fusion. For completeness, we evaluate our method on several publicly available highly dynamic datasets to demonstrate its versatility and robustness. Experiments show that its tracking performance is better than that of other dense and dynamic SLAM approaches.

Highlights

  • Simultaneous localization and mapping (SLAM) aims to produce a consistent map of the environment and to estimate the sensor pose within that map from noisy range sensor measurements

  • The SLAM problem has been studied extensively by researchers in the field of robotics

  • The evaluation is performed using the metrics proposed by Sturm et al. [3]: translational and rotational relative pose error (RPE) and translational absolute trajectory error (ATE)
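The evaluation metrics named in the highlight above can be sketched as follows. This is a minimal illustration of the translational ATE and RPE definitions from Sturm et al. [3], assuming trajectories are given as (N, 3) translation arrays and 4x4 homogeneous pose matrices; the function names and the assumption that rigid trajectory alignment has already been performed are ours, not the benchmark's reference implementation.

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """Translational ATE (RMSE) between two trajectories.

    gt, est: (N, 3) arrays of corresponding camera translations.
    Assumes a least-squares rigid alignment has already been applied.
    """
    errors = np.linalg.norm(gt - est, axis=1)
    return np.sqrt(np.mean(errors ** 2))

def relative_pose_error(poses_gt, poses_est, delta=1):
    """Translational RPE over a fixed frame interval delta.

    poses_gt, poses_est: lists of 4x4 homogeneous camera poses.
    Compares relative motion over delta frames, so it measures drift
    independently of global alignment.
    """
    errs = []
    for i in range(len(poses_gt) - delta):
        rel_gt = np.linalg.inv(poses_gt[i]) @ poses_gt[i + delta]
        rel_est = np.linalg.inv(poses_est[i]) @ poses_est[i + delta]
        diff = np.linalg.inv(rel_gt) @ rel_est
        errs.append(np.linalg.norm(diff[:3, 3]))
    return np.sqrt(np.mean(np.square(errs)))
```

Rotational RPE is computed analogously from the rotation part of `diff`; it is omitted here for brevity.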


Introduction

Simultaneous localization and mapping (SLAM) aims to produce a consistent map of the environment and to estimate the sensor pose within that map from noisy range sensor measurements. Palazzolo et al. [4] propose ReFusion, where dynamics are detected using the residuals obtained from registration against the signed distance function (SDF). Although this approach can create a consistent mesh of the environment, highly dynamic changes deteriorate its mapping performance. Kim and Kim [20] propose using the difference between depth images to eliminate dynamics in the scene; however, this algorithm requires an optimized background estimator suitable for parallel processing. FlowFusion [32] uses optical flow residuals from PWC-Net [33] to separate dynamic human subjects from the static background; such approaches rely heavily on prior training. In contrast, our method eliminates repeated views: because similar frames are otherwise numerous, discarding them reduces unnecessary computation. This is the novel enhancement we provide over existing methods in the literature to improve performance.
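As a sketch of the residual-based idea described above, the following illustrates how per-pixel registration residuals could be modeled with a generalized Gaussian distribution (GGD) and thresholded into static and dynamic classes. The moment-matching shape estimator, the scale recovery from the sample variance, and the multiplier `k` are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import math
import numpy as np

def ggd_shape(residuals):
    """Estimate the GGD shape parameter b by moment matching.

    Solves Gamma(2/b)^2 / (Gamma(1/b) * Gamma(3/b)) = E|r|^2 / E[r^2]
    for b by bisection (the left side is increasing in b).
    b = 2 recovers the Gaussian case, b = 1 the Laplacian.
    """
    r = np.asarray(residuals, dtype=float)
    rho = np.mean(np.abs(r)) ** 2 / np.mean(r ** 2)
    f = lambda b: math.gamma(2.0 / b) ** 2 / (math.gamma(1.0 / b) * math.gamma(3.0 / b))
    lo, hi = 0.1, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def classify_dynamic(residuals, k=3.0):
    """Label measurements dynamic when |residual| exceeds k GGD scale units.

    Fits shape b and scale alpha to the residuals, using the GGD variance
    identity var = alpha^2 * Gamma(3/b) / Gamma(1/b), then thresholds.
    Returns a boolean mask (True = dynamic).
    """
    r = np.asarray(residuals, dtype=float)
    b = ggd_shape(r)
    alpha = math.sqrt(np.var(r) * math.gamma(1.0 / b) / math.gamma(3.0 / b))
    return np.abs(r) > k * alpha
```

In a dense pipeline, the mask would be computed per frame from the 3D reconstruction (registration) residuals, and pixels flagged as dynamic would be excluded from volumetric fusion.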

