Abstract

With the increasing popularity of RGB-depth (RGB-D) sensors, research on using them to reconstruct three-dimensional (3D) indoor scenes has gained increasing attention. In this paper, an automatic point cloud registration algorithm is proposed to efficiently handle 3D indoor scene reconstruction using a pan-tilt platform at a fixed position. The proposed algorithm aligns multiple point clouds using the extrinsic parameters of the RGB-D camera obtained at every preset pan-tilt control point. A computationally efficient global registration method is proposed, based on transformation matrices formed from the offline-calibrated extrinsic parameters. A local registration method, an optional step in the proposed algorithm, is then employed to refine the preliminary alignment. Experimental results validate the quality and computational efficiency of the proposed point cloud alignment algorithm in comparison with two state-of-the-art methods.

Highlights

  • Three-dimensional (3D) scene reconstruction is an important issue for several applications of robotic vision such as map construction [1], environment recognition [2], augmented reality [3,4], and simultaneous localization and mapping (SLAM) [5,6].

  • To improve the processing speed of point cloud alignment, this paper presents a novel calibration-based method that employs camera calibration techniques to determine the transformation matrices between a set of prefixed motor control points in the offline state, and then uses these transformation matrices directly in the online state.

  • Because the proposed algorithm combines pan-tilt camera control with coordinate transformation to perform point cloud alignment, existing public databases were not used in the experiments, owing to the requirement of offline calibration.
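The calibration-based idea in the highlights can be sketched as follows. This is an illustrative example, not the paper's code: the names and the toy pan-only extrinsics are assumptions. Offline, one rigid transform is computed per preset pan-tilt control point; online, aligning a captured cloud reduces to a table lookup instead of data-driven registration.

```python
import numpy as np

def make_pan_transform(pan_deg):
    """Toy extrinsic: a pure rotation about the vertical (y) axis.

    In the actual method this 4x4 matrix would come from camera
    calibration at the given pan-tilt preset, not from the pan angle alone.
    """
    a = np.deg2rad(pan_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), 0.0, np.sin(a)],
                 [0.0,       1.0, 0.0],
                 [-np.sin(a), 0.0, np.cos(a)]]
    return T

# Offline state: precompute one transform per (pan, tilt) control point.
calibrated = {(pan, 0): make_pan_transform(pan) for pan in (0, 30, 60, 90)}

# Online state: fetching the transform for preset (30, 0) is a dictionary
# lookup -- no feature matching or iterative estimation is needed.
T_30 = calibrated[(30, 0)]
```

The online cost is therefore one matrix multiply per point, which is what makes the global registration step computationally cheap.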


Introduction

Three-dimensional (3D) scene reconstruction is an important issue for several applications of robotic vision such as map construction [1], environment recognition [2], augmented reality [3,4], and simultaneous localization and mapping (SLAM) [5,6]. 3D scene reconstruction usually relies on sensors such as stereo cameras, RGB-depth (RGB-D) cameras, or time-of-flight (TOF) cameras. This paper discusses how to employ RGB-D camera data for 3D scene reconstruction applications. The 3D colored point cloud is obtained by combining the two-dimensional (2D) RGB and depth images provided by the RGB-D camera. Each frame of the RGB-D point cloud has an independent coordinate system, which can be defined as the camera coordinate system Ci at time i. We expect to map each coordinate system Ci to the same world coordinate system W, which serves as the common mapping target. In addition to defining the world coordinate system W, transformation matrices that transform each coordinate system Ci to W are required. Existing fine registration methods are then employed to achieve the optimal 3D scene reconstruction.
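The mapping from each camera frame Ci to the world frame W described above can be written as a 4x4 homogeneous rigid transform applied to every point. The sketch below is a minimal example with numpy, assuming a toy transform Ti (a 90-degree pan plus a small translation); in the actual pipeline Ti would come from the calibrated extrinsic parameters.

```python
import numpy as np

def transform_points(points_c, T):
    """Map an (N, 3) array of points from a camera frame into the world
    frame using a 4x4 homogeneous rigid transform T."""
    n = points_c.shape[0]
    homo = np.hstack([points_c, np.ones((n, 1))])  # (N, 4) homogeneous coords
    return (homo @ T.T)[:, :3]

# Toy Ti: 90-degree rotation about the y axis plus a 0.1 m translation in x.
theta = np.pi / 2
T_i = np.eye(4)
T_i[:3, :3] = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                        [0.0,           1.0, 0.0],
                        [-np.sin(theta), 0.0, np.cos(theta)]])
T_i[:3, 3] = [0.1, 0.0, 0.0]

pts_c = np.array([[0.0, 0.0, 1.0]])  # one point 1 m in front of the camera
pts_w = transform_points(pts_c, T_i)  # point expressed in the world frame W
```

Concatenating the clouds transformed this way yields the preliminary global alignment; a fine registration step (e.g. an ICP-style refinement) can then be applied to the merged result.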

