Abstract

Simultaneous localization and mapping (SLAM) is the process of estimating the trajectory of a mobile sensor carrier while building a representation of its surroundings. Traditional SLAM algorithms rely on the "static world assumption" and simplify the problem by filtering out moving objects or tracking them separately in complex dynamic environments. However, this strong assumption restricts the application of SLAM algorithms in highly dynamic and unstructured environments. To resolve this problem, this paper proposes an improved object-aware dynamic SLAM system that integrates image information, i.e., semantic and velocity information. Firstly, we adopt deep learning methods to detect both the 2D and 3D bounding boxes of objects in the environment. This information is then used to perform multi-view, multi-dimensional bundle optimization that jointly refines the poses of the camera, objects, and points. Secondly, 2D detection results from the image and 3D detection results from the lidar are fused by the joint probabilistic data association (JPDA) algorithm to facilitate object-level data association. We also calculate 2D and 3D motion velocities, which are used to constrain the motion of the objects. Finally, we perform comprehensive experiments on different datasets, including NCLT, M2DGR, and KITTI, to demonstrate the performance of the proposed method.
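The abstract does not give the paper's exact JPDA formulation, but the following minimal Python sketch illustrates the general idea behind probabilistic data association of detections to tracked objects. It uses a simplified per-track normalization rather than the full joint-event enumeration of JPDA, and all names and parameters (`jpda_weights`, `p_detect`, `clutter_density`, `gate`) are illustrative assumptions, not the authors' API.

```python
# Simplified, hypothetical sketch of probabilistic data association between
# tracked objects and new detections (e.g., 2D camera and 3D lidar boxes
# projected into a common measurement space). Not the paper's implementation.
import numpy as np

def jpda_weights(z_preds, S_list, measurements, p_detect=0.9,
                 clutter_density=1e-3, gate=9.21):
    """Per-track association weights over gated measurements.

    z_preds      : predicted measurements, one per tracked object
    S_list       : innovation covariances, one per track
    measurements : observed detections (one row per detection)
    gate         : chi-square gating threshold (9.21 ~ 99% for 2 dof)

    Returns a list of arrays beta[t], where beta[t][j] is the weight that
    measurement j belongs to track t and beta[t][-1] is the missed-detection
    weight.
    """
    betas = []
    for z_pred, S in zip(z_preds, S_list):
        norm = np.sqrt(np.linalg.det(2.0 * np.pi * S))
        likes = []
        for z in measurements:
            d = np.asarray(z) - z_pred
            m2 = d @ np.linalg.solve(S, d)  # squared Mahalanobis distance
            # Zero out measurements outside the validation gate.
            likes.append(np.exp(-0.5 * m2) / norm if m2 < gate else 0.0)
        likes = p_detect * np.asarray(likes)
        miss = clutter_density * (1.0 - p_detect)  # "no match" mass
        betas.append(np.append(likes, miss) / (likes.sum() + miss))
    return betas
```

In a full JPDA tracker these weights would be computed over joint association events so that no measurement is assigned to two tracks at once; the per-track normalization above is only a compact stand-in for illustration.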
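Likewise, the abstract states that estimated velocities constrain object motion during optimization but does not spell out the residual. A common way to realize such a constraint is a constant-velocity motion factor in the bundle adjustment; the sketch below shows one such residual, reduced to a planar (x, y, yaw) pose for brevity. The function name and parameterization are assumptions for illustration only.

```python
# Hypothetical constant-velocity motion factor for an object in bundle
# adjustment. Poses are simplified to (x, y, yaw); the paper's formulation
# may differ (e.g., full SE(3) poses).
import numpy as np

def velocity_residual(pose_t, pose_t1, velocity, dt):
    """Penalize deviation of the object's pose at t+1 from a
    constant-velocity prediction made at time t.

    pose_t, pose_t1 : (x, y, yaw) object poses at consecutive frames
    velocity        : (vx, vy, yaw_rate) estimated from 2D/3D detections
    dt              : time between the two frames
    """
    x, y, th = pose_t
    vx, vy, w = velocity
    pred = np.array([x + vx * dt, y + vy * dt, th + w * dt])
    r = np.asarray(pose_t1) - pred
    r[2] = (r[2] + np.pi) % (2.0 * np.pi) - np.pi  # wrap yaw to [-pi, pi)
    return r  # a joint optimizer minimizes the sum of squared residuals
```

Adding such residuals alongside reprojection terms is what lets a joint optimizer refine camera, object, and point estimates consistently, as the abstract describes.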

