Abstract
Indoor localization is a prerequisite for autonomous robot applications in the construction industry. However, traditional localization techniques rely on low-level features and do not exploit construction-related semantics. They are also sensitive to environmental factors such as illumination and reflectance, and therefore suffer from unexpected drift and failures. This study proposes a pose graph relocalization framework that utilizes object-level landmarks to enhance a traditional visual localization system. The proposed framework builds an object landmark dictionary from a Building Information Model (BIM) as prior knowledge. A multimodal deep neural network (DNN) is then proposed to perform 3D object detection in real time, followed by instance-level object association with false-positive rejection and relative pose estimation with outlier removal. Finally, a keyframe-based graph optimization is performed to rectify the drift of the traditional visual localization. The proposed framework was validated using a mobile platform with red-green-blue-depth (RGB-D) and inertial sensors, and the test scene was an indoor office environment with furnishing elements. The object detection model achieved 62.9% mean average precision (mAP). The relocalization technique reduced translational drift by 64.67% and rotational drift by 41.59% compared with traditional visual-inertial odometry.
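To give a rough sense of the keyframe-based graph optimization step described above, the sketch below (not the authors' implementation) refines drifting keyframe poses by least-squares over odometry constraints and object-landmark observations drawn from a BIM-style dictionary. The SE(2) parameterization, the landmark name, and all numeric values are illustrative assumptions; the paper's system operates in 3D with its own constraint formulation.

```python
# Minimal 2D pose-graph relocalization sketch (illustrative only): keyframe
# poses (x, y, theta) from drifting odometry are corrected using relative-pose
# observations of object landmarks whose global poses come from a BIM dictionary.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(pa, pb):
    """Pose of b expressed in frame a (SE(2))."""
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    c, s = np.cos(pa[2]), np.sin(pa[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(pb[2] - pa[2])])

# Hypothetical inputs: drifted keyframe poses from visual-inertial odometry,
# odometry edges between consecutive keyframes, and one BIM object landmark
# with a known global pose observed from the last keyframe.
kf_init = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.05, 0.02],
                    [2.1, 0.15, 0.05]])                       # drifted estimates
odom_edges = [(0, 1, relative_pose(kf_init[0], kf_init[1])),
              (1, 2, relative_pose(kf_init[1], kf_init[2]))]
landmarks = {"door_01": np.array([2.0, 0.0, 0.0])}            # from the BIM dictionary
obs_edges = [(2, "door_01", np.array([-0.05, -0.02, -0.04]))] # measured landmark pose in keyframe 2

def residuals(x):
    poses = x.reshape(-1, 3)
    res = []
    for i, j, meas in odom_edges:                 # odometry constraints
        res.append(relative_pose(poses[i], poses[j]) - meas)
    for i, name, meas in obs_edges:               # object-landmark constraints
        res.append(relative_pose(poses[i], landmarks[name]) - meas)
    res.append(poses[0] - kf_init[0])             # anchor the first keyframe
    return np.concatenate(res)

sol = least_squares(residuals, kf_init.ravel())
print(sol.x.reshape(-1, 3))                       # drift-corrected keyframe poses
```

In this toy setup the landmark observation pulls the last keyframe back toward the globally consistent landmark pose, which is the same mechanism by which BIM-derived object landmarks bound odometric drift in the proposed framework.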