Abstract

RGB-Depth (RGB-D) cameras are widely used in computer vision and robotics applications such as 3D modeling and human–computer interaction. To capture 3D information of an object from different viewpoints simultaneously, we need to use multiple RGB-D cameras. To minimize costs, the cameras are often sparsely distributed without shared scene features. Due to the advantage of being visible from different viewpoints, spherical objects have been used for extrinsic calibration of widely-separated cameras. Assuming that the projected shape of the spherical object is circular, this paper presents a multi-cue-based method for detecting circular regions in a single color image. Experimental comparisons with existing methods show that our proposed method accurately detects spherical objects against cluttered backgrounds under different illumination conditions. The circle detection method is then applied to extrinsic calibration of multiple RGB-D cameras, for which we propose to use robust cost functions to reduce errors due to misdetected sphere centers. Through experiments, we show that the proposed method provides accurate calibration results in the presence of outliers and performs better than a least-squares-based method.
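
The role of a robust cost function in the calibration step can be illustrated with a minimal sketch. The code below assumes sphere centers have already been estimated in each camera's frame (as (N, 3) arrays) and uses a Huber loss as one possible robust cost; the loss choice, scale value, and variable names are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch: estimate the rigid transform between two RGB-D cameras from
# corresponding sphere-center estimates with a robust (Huber) cost, which
# down-weights misdetected centers compared with plain least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def residuals(params, centers_a, centers_b):
    """Residuals of camera-A sphere centers mapped into camera B's frame."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    return ((centers_a @ R.T + t) - centers_b).ravel()


def calibrate_robust(centers_a, centers_b):
    """centers_a, centers_b: (N, 3) sphere centers observed by the two cameras."""
    x0 = np.zeros(6)  # rotation vector (3) + translation (3)
    # loss='huber' reduces the influence of outlier centers;
    # f_scale is the assumed inlier scale in metres.
    sol = least_squares(residuals, x0, args=(centers_a, centers_b),
                        loss='huber', f_scale=0.02)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

Replacing `loss='huber'` with the default `loss='linear'` recovers the least-squares baseline that the abstract compares against.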

Highlights

  • An RGB-D camera is a tightly-coupled pair of one depth camera and one color camera

  • Assuming that the projected shape of the spherical object is circular, we propose a circle detection method based on region and edge cues (an illustrative sketch follows these highlights)

  • This paper focuses on the extrinsic calibration between different RGB-D cameras, assuming that the individual RGB-D cameras have been fully calibrated
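
As a rough illustration of combining a region cue with a shape cue for sphere detection, the sketch below segments a color range and keeps near-circular contours. This is not the paper's multi-cue detector; the HSV bounds, minimum radius, and circularity threshold are assumed values.

```python
# Hedged sketch of a region-plus-shape circle detector (illustrative only).
import cv2
import numpy as np


def detect_circle(bgr, hsv_lo=(0, 120, 60), hsv_hi=(10, 255, 255), min_fill=0.8):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Region cue: pixels whose color falls inside the assumed HSV range.
    mask = cv2.inRange(hsv, np.array(hsv_lo, np.uint8), np.array(hsv_hi, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        (x, y), r = cv2.minEnclosingCircle(c)          # shape/edge cue
        if r < 5:
            continue
        fill = cv2.contourArea(c) / (np.pi * r * r)    # circularity measure
        if fill >= min_fill and (best is None or r > best[2]):
            best = (x, y, r)
    return best  # (cx, cy, radius) in pixels, or None if no circle found
```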



Introduction

Because they provide color and depth information in real time, RGB-D cameras have been widely used in many computer vision and robotics tasks such as human or hand pose estimation [1,2] and dense 3D reconstruction. A single RGB-D camera can capture full 3D information of a static object or environment: we can move the camera to capture multiple color and depth image pairs from different viewpoints. The 3D information from the individual depth images is then fused, either with the iterative closest point algorithm [3] or by matching features across images [4], to produce a dense 3D model of the object or environment. Once the RGB-D camera is fully calibrated, the acquired 3D points can be mapped to their corresponding pixels in the color images, enabling texture mapping of the reconstructed 3D model.
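
The depth-to-color mapping mentioned at the end of the paragraph amounts to a standard pinhole projection. The sketch below assumes the depth-to-color extrinsics (R, t) and the color camera's intrinsic matrix K are known from the full calibration; the function and variable names are illustrative.

```python
# Hedged sketch: map a 3D point in the depth camera's frame to a color pixel.
import numpy as np


def depth_point_to_color_pixel(p_depth, R, t, K):
    """p_depth: (3,) point in metres in the depth camera frame.
    R, t: depth-to-color rotation (3x3) and translation (3,).
    K: 3x3 color-camera intrinsic matrix. Returns (u, v) pixel coordinates."""
    p_color = R @ p_depth + t      # express the point in the color camera frame
    uvw = K @ p_color              # pinhole projection into the image plane
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```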

