Abstract

Driven by the miniaturization and lightweight design of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. The point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, and simulation. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser-scanner-based systems are mostly expensive and not portable. Low-cost consumer RGB-D cameras provide an alternative way to address the core challenge of indoor mapping, namely capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency in the data collection stage and incomplete datasets that miss major building structures (e.g. ceilings, walls). Attempting to collect a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour required and the number of position parameters that need to be solved. To provide an efficient and low-cost solution to 3D indoor mapping, in this paper we present an indoor mapping suite prototype built upon a novel calibration method that calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view.
The calibration procedure is threefold: (1) the internal parameters of the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (2) the external parameters between the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (3) the external parameters between the Kinects are first calculated using a pre-set calibration field and then refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground-truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.
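The iterative closest point refinement in step (3) alternates nearest-neighbour matching with a closed-form rigid-alignment step. As a minimal sketch of that inner step (the Kabsch/SVD solution, assuming point correspondences are already given; the abstract does not specify the authors' exact ICP variant):

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form estimate of rotation R and translation t mapping
    src onto dst (the alignment step inside each ICP iteration)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: recover a known 30-degree yaw and a translation.
rng = np.random.default_rng(0)
pts = rng.random((100, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
t_true = np.array([0.1, -0.2, 0.3])
R, t = rigid_transform(pts, pts @ R_true.T + t_true)
```

A full ICP would re-estimate correspondences between the overlapping regions of two Kinects' point clouds and repeat this step until convergence.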

Highlights

  • Driven by the miniaturization and lightweight design of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic

  • State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners (Trimble, 2016) produce accurate point clouds of building interiors containing billions of points

  • Low-cost RGB-D cameras are often not equipped with a position and orientation measurement suite, and visual odometry (Gutierrez-Gomez et al, 2016; Huang A, 2011; Nistér et al, 2006; Whelan et al, 2015) is often used as a substitute for active measurement equipment such as an IMU

Summary

INTRODUCTION

Driven by the lightweight design of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners (Trimble, 2016) produce accurate point clouds of building interiors containing billions of points. These laser-scanner-based systems are mostly expensive and not portable. The FOV of a depth camera is smaller than 60 degrees, and its usable range is between 3 and 5 m, which makes tracking failures and matching errors extremely likely. To address these shortcomings of the depth camera and provide an efficient and economical solution for indoor data collection, this paper proposes a novel method using a sensor array that combines multiple Kinect sensors, and presents a prototype indoor scanner
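The motivation for the sensor array can be made concrete with a small coverage calculation: mounting three sensors at different yaw angles merges their individual fields of view into one wide swath. The ~57° horizontal FOV is the commonly cited Kinect depth-camera value, and the yaw offsets below are illustrative assumptions, not the prototype's actual rig geometry:

```python
# Combined horizontal coverage of three sensors at assumed yaw offsets.
FOV = 57.0                      # approx. Kinect depth-camera horizontal FOV (deg)
yaw_offsets = [-50.0, 0.0, 50.0]  # illustrative mounting angles, not from the paper

intervals = [(y - FOV / 2, y + FOV / 2) for y in sorted(yaw_offsets)]

# Merge overlapping angular intervals, then sum their widths.
merged = [list(intervals[0])]
for lo, hi in intervals[1:]:
    if lo <= merged[-1][1]:
        merged[-1][1] = max(merged[-1][1], hi)
    else:
        merged.append([lo, hi])
coverage = sum(hi - lo for lo, hi in merged)
print(coverage)  # 157.0 degrees under these assumed offsets
```

With these assumed offsets, adjacent sensors overlap by 7°, which is also what makes the pairwise ICP refinement between neighbouring Kinects feasible.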

Hardware
Calibration of Sensor Array
Intrinsic Calibration
Relative Pose of IR and Color Camera
Depth Correction
Relative Pose of Sensors
Experiment of calibration
Experiment of data acquisition
CONCLUSION
