Abstract

Traditional mapping modules in Visual Simultaneous Localization and Mapping (Visual SLAM) systems estimate only the 3D positions of isolated sparse or semi-dense feature points. Yet environments contain many object instances whose geometric information can be exploited to improve the quality of mapping and localization. Visual SLAM systems should therefore make use of higher-level features, such as object instances or structural lines, during mapping and localization. To bridge the gap between these requirements and traditional Visual SLAM implementations, we present in this paper a novel Visual SLAM method that effectively utilizes texture-less object instances for mapping and localization. The proposed method includes newly designed feature extraction, matching, localization, and mapping modules, which jointly use object features and point features to estimate camera 6-DOF poses and to build richer maps. A group of organized raster points represents each object during the feature matching and pose estimation stages of the proposed pipeline. Owing to the fusion of object features into the co-visibility graph, the system can perform scale-aware bundle adjustment to reduce accumulated error. The advantages of the proposed method are demonstrated through experiments on both synthetic and real-world datasets.
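The abstract's "organized raster points" representation could be illustrated with a minimal sketch. Everything here is an assumption for illustration only (the paper's full text is not available on this page): we suppose each texture-less object is represented by a regular grid of 3D points sampled over its bounding cuboid, and that these points are projected into the image so the resulting 2D-3D correspondences can enter a pose solver alongside ordinary point features. The function names `object_raster_points` and `project` are hypothetical.

```python
import numpy as np

def object_raster_points(center, size, n=3):
    """Hypothetical stand-in for the paper's 'organized raster points':
    an ordered n*n*n grid of 3D points over an object's bounding cuboid."""
    cx, cy, cz = center
    sx, sy, sz = size
    xs = np.linspace(cx - sx / 2, cx + sx / 2, n)
    ys = np.linspace(cy - sy / 2, cy + sy / 2, n)
    zs = np.linspace(cz - sz / 2, cz + sz / 2, n)
    # 'indexing="ij"' keeps the grid ordering deterministic, so the same
    # object yields the same ordered point set in every frame.
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1)
    return grid.reshape(-1, 3)

def project(points, R, t, K):
    """Pinhole projection of world points into the image. The resulting
    2D-3D pairs could feed a PnP / bundle-adjustment step together with
    conventional point features (assumed usage, not the paper's exact one)."""
    cam = points @ R.T + t          # world -> camera frame
    uv = cam @ K.T                  # camera frame -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]   # perspective division

# Example: an object of unit size, 4 m in front of an identity-pose camera.
pts = object_raster_points(center=(0.0, 0.0, 4.0), size=(1.0, 1.0, 1.0), n=3)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project(pts, np.eye(3), np.zeros(3), K)
print(pts.shape, uv.shape)  # (27, 3) (27, 2)
```

The grid ordering matters: because the raster points come in a fixed order, matching an object across frames matches the whole ordered point set at once, rather than searching for individual texture-based correspondences on a texture-less surface.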
