Abstract

Simultaneous localization and mapping (SLAM) in dynamic environments is an important problem in robot navigation, yet it has received comparatively little study. In this paper, we present a novel approach to segmenting and tracking multiple moving objects in real dynamic environments. Detected objects are classified as stationary or moving using a state-of-the-art method, the multilevel-RANSAC (ML-RANSAC) algorithm. The algorithm is designed to track moving objects through conflict situations, such as occlusions, while running SLAM. ML-RANSAC robustly estimates the position and velocity of multiple moving objects in an unknown environment in which the state of each object (static or dynamic) is not known a priori. The main characteristic of the algorithm is that it handles both static and dynamic objects within SLAM and performs detection and tracking of moving objects (DATMO) without dividing the problem into two separate parts (SLAM and DATMO). We apply the proposed algorithm to two sets of simulated data to validate its performance in situations where objects are occluded or placed in a dense dynamic scene, and we compare its estimates against ground-truth data in these simulation studies. Furthermore, we implement the algorithm on a Pioneer P3-DX mobile robot navigating a real dynamic environment. Both the simulation studies and the real-time experiments indicate that the algorithm can accurately track and classify objects while performing SLAM in dynamic environments.
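To make the core idea concrete, the sketch below illustrates one way a RANSAC-style consensus test can label an object track as static or dynamic: fit a constant-velocity model to the object's world-frame positions and classify by the estimated speed and inlier ratio. This is a minimal, hypothetical illustration of the general technique, not the paper's ML-RANSAC; all function names, thresholds, and the two-point sampling scheme are assumptions made for this example.

```python
# Hypothetical sketch (NOT the authors' ML-RANSAC): classify one tracked
# object as static or dynamic by RANSAC-fitting a constant-velocity model
# to its world-frame position estimates. Thresholds are illustrative.
import numpy as np

def ransac_velocity(times, positions, n_iters=100, inlier_tol=0.05, rng=None):
    """Estimate (velocity, inlier_ratio) for a 2-D track via RANSAC.

    times:     (N,) sample timestamps
    positions: (N, 2) world-frame (x, y) positions of one tracked object
    """
    rng = rng or np.random.default_rng(0)
    n = len(times)
    best_v, best_inliers = np.zeros(2), 0
    for _ in range(n_iters):
        i, j = rng.choice(n, size=2, replace=False)   # minimal sample: 2 points
        dt = times[j] - times[i]
        if abs(dt) < 1e-6:
            continue
        v = (positions[j] - positions[i]) / dt        # candidate velocity
        pred = positions[i] + np.outer(times - times[i], v)
        inliers = np.sum(np.linalg.norm(positions - pred, axis=1) < inlier_tol)
        if inliers > best_inliers:
            best_v, best_inliers = v, inliers
    return best_v, best_inliers / n

def classify_track(times, positions, speed_thresh=0.1):
    """Label a track 'dynamic' if its RANSAC-fit speed exceeds a threshold
    and the constant-velocity model explains most of the observations."""
    v, ratio = ransac_velocity(times, positions)
    if np.linalg.norm(v) > speed_thresh and ratio > 0.5:
        return "dynamic"
    return "static"

# Example: an object drifting at 0.3 m/s is labelled dynamic.
t = np.linspace(0, 5, 26)
noise = 0.01 * np.random.default_rng(1).standard_normal((26, 2))
track = np.stack([0.3 * t, np.zeros_like(t)], axis=1) + noise
print(classify_track(t, track))   # -> dynamic
```

In the paper's setting this classification runs alongside SLAM, so static objects can be fed to the map while dynamic ones are passed to the tracker; the sketch only shows the per-track consensus test in isolation.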
