Abstract

The developing time-of-flight (TOF) camera is an attractive device for robot vision systems because it captures real-time three-dimensional (3D) images, but the sensor suffers from low resolution and limited depth precision. This article proposes an approach that automatically generates an imaging error model in 3D space for error correction. From observation data, an initial coarse model of the depth image is obtained for each TOF camera; its accuracy is then improved by an optimization method. Experiments are carried out using three TOF cameras. Results show that accuracy is dramatically improved by the spatial correction model.

Highlights

  • Three-dimensional (3D) vision is vital for a robot system working in uncertain environments

  • We address how to determine the error model of the TOF camera and how to optimize the depth image based on a multicamera system in the third section

  • In the second experiment scene, the radius of the sphere is 0.1 m and the TOF camera is set 1.5 m from the center of the sphere; the depth correction method can be applied directly with the error model calculated from the first scene


Summary

Introduction

Three-dimensional (3D) vision is vital for a robot system working in uncertain environments. Kahlmann et al.[18] propose a depth correction method based on the SR-2 TOF camera: a calibration board with different reflectivities is applied to the center pixel of the camera, the error is determined using a high-precision track line, and it is then corrected by linear interpolation that accounts for the exposure time. We calibrate the three cameras in pairs.[31] Based on the transform matrices calculated from this calibration, the pixels corresponding to a point A on the left and right camera projection planes can be determined, namely A2[u2, v2] and A3[u3, v3]. In the second experiment scene, the radius of the sphere is 0.1 m and the TOF camera is set 1.5 m from the center of the sphere; the depth correction method can then be applied directly with the error model calculated from the first scene. The point cloud acquired by the TOF camera contains jump-edge points, which we eliminate with the line-of-sight method[8] before the depth error correction.
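The corresponding-pixel step above can be sketched as follows, assuming a standard pinhole camera model: a 3D point known in one camera's frame is mapped into a second camera's frame through the 4x4 rigid transform obtained from pairwise calibration, then projected through that camera's intrinsic matrix. The intrinsics `K2`, the transform `T_12`, and the point coordinates below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def project_point(point_cam, K):
    """Project a 3D point, expressed in a camera's own frame,
    onto that camera's image plane (pinhole model)."""
    x, y, z = point_cam
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return np.array([u, v])

def corresponding_pixel(point_cam1, T_12, K2):
    """Map a 3D point from camera 1's frame into camera 2's frame
    via the 4x4 transform from pairwise calibration, then project
    it to pixel coordinates [u2, v2] on camera 2's image plane."""
    p_h = np.append(point_cam1, 1.0)      # homogeneous coordinates
    p_cam2 = (T_12 @ p_h)[:3]
    return project_point(p_cam2, K2)

# Illustrative values (assumptions, not from the paper):
K2 = np.array([[250.0,   0.0, 88.0],
               [  0.0, 250.0, 72.0],
               [  0.0,   0.0,  1.0]])    # intrinsics of camera 2
T_12 = np.eye(4)
T_12[0, 3] = 0.1                         # 10 cm baseline along x
A = np.array([0.2, 0.0, 1.5])            # point A in camera 1's frame

print(corresponding_pixel(A, T_12, K2))  # pixel of A on camera 2
```

The same routine applied with the transform to the third camera yields A3[u3, v3], giving the pixel correspondences used in the multicamera optimization.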

Experiments and results
Discussion
Conclusion

