Abstract

Many algorithms for mobile robot mapping in indoor environments have been developed. In this work we use a Kinect 2.0 camera, a visible-range camera Beward B2720, and an infrared camera FLIR Tau 2 to build dense 3D maps of indoor environments. We present RGB-D Mapping together with a new fusion algorithm that combines visual features and depth information for image matching, 3D point cloud alignment, loop-closure detection, and pose graph optimization to build globally consistent 3D maps. Such 3D maps have various applications in robot navigation, real-time tracking, non-cooperative remote surveillance, face recognition, and semantic mapping. The performance and computational complexity of the proposed RGB-D Mapping algorithm in real indoor environments are presented and discussed.
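
As a rough illustration of the pipeline summarized above (not the authors' implementation), the sketch below aligns consecutive RGB-D frames with ICP and refines the camera trajectory with pose graph optimization using the open-source Open3D library. The file names, camera intrinsics, and thresholds are assumptions, and the feature-based matching and loop-closure detection steps are only indicated by comments.

import copy
import numpy as np
import open3d as o3d

# Kinect-style intrinsics used only as a placeholder; real values should
# come from camera calibration.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

def load_point_cloud(color_path, depth_path):
    # Convert one RGB-D frame into a downsampled 3D point cloud.
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=4.0, convert_rgb_to_intensity=False)
    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
    return pcd.voxel_down_sample(voxel_size=0.02)

# Hypothetical file names; in practice the frames come from the sensor stream.
frames = [load_point_cloud("color_%03d.png" % i, "depth_%03d.png" % i)
          for i in range(5)]

max_dist = 0.05  # maximum ICP correspondence distance in meters (assumed)
pose_graph = o3d.pipelines.registration.PoseGraph()
odometry = np.identity(4)
pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(odometry))

for i in range(len(frames) - 1):
    # Frame-to-frame alignment; a full system would first use visual feature
    # matches from the RGB images to initialize this ICP step.
    result = o3d.pipelines.registration.registration_icp(
        frames[i], frames[i + 1], max_dist, np.identity(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    info = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        frames[i], frames[i + 1], max_dist, result.transformation)
    # Chain the odometry and add a sequential (certain) edge to the pose graph.
    odometry = result.transformation @ odometry
    pose_graph.nodes.append(
        o3d.pipelines.registration.PoseGraphNode(np.linalg.inv(odometry)))
    pose_graph.edges.append(o3d.pipelines.registration.PoseGraphEdge(
        i, i + 1, result.transformation, info, uncertain=False))

# Loop-closure edges between non-adjacent frames (detected by matching visual
# features) would be added here with uncertain=True before optimization.
o3d.pipelines.registration.global_optimization(
    pose_graph,
    o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt(),
    o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria(),
    o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=max_dist, reference_node=0))

# Fuse all frames into one globally consistent map using the optimized poses.
global_map = o3d.geometry.PointCloud()
for node, pcd in zip(pose_graph.nodes, frames):
    aligned = copy.deepcopy(pcd)
    aligned.transform(node.pose)
    global_map += aligned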
