Abstract
The traditional structured-light binocular vision measurement system consists of two cameras and a projector, and it can be regarded as two monocular vision systems, each composed of the projector and one camera. In this paper, we present a three-dimensional (3D) measurement method based on the combination of binocular vision and monocular vision. The common field of view is reconstructed by the binocular vision system, and the missing data regions are filled in by the two monocular vision systems. To improve measurement accuracy and unify the three world coordinate systems, a calibration method is proposed. The calibration procedure consists of calibrating the binocular vision system, calibrating the two monocular vision systems, and globally optimizing the three systems to unify them under a common reference. For the monocular vision system calibration, a new method based on a virtual target is proposed and used to establish the coordinate relations. We use a projector and two cameras to build a vision system for testing the proposed technique. The experimental results show that the calibration algorithm ensures consistent accuracy across the three systems, which is important for data fusion, and that the proposed method effectively improves the completeness of the measurement results and expands the measuring range.
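The abstract describes fusing the binocular reconstruction with the two monocular reconstructions after the global calibration unifies them under a common reference. The following is a minimal sketch of that fusion step, assuming the calibration yields a rigid transform (R_i, t_i) mapping each monocular system's world frame into the common binocular frame; the function names and dummy data are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: map the two monocular point clouds into the common
# (binocular) world frame with rigid transforms, then merge all three clouds.
import numpy as np

def to_common_frame(points, R, t):
    """Apply a rigid transform X' = R @ X + t to an (N, 3) point cloud."""
    return points @ R.T + t

def fuse_reconstructions(binocular_pts, mono1_pts, mono2_pts, R1, t1, R2, t2):
    """Merge the binocular reconstruction with the two monocular
    reconstructions expressed in the common world frame."""
    mono1_in_common = to_common_frame(mono1_pts, R1, t1)
    mono2_in_common = to_common_frame(mono2_pts, R2, t2)
    return np.vstack([binocular_pts, mono1_in_common, mono2_in_common])

if __name__ == "__main__":
    # Dummy data; identity transforms leave the monocular points unchanged.
    bino = np.random.rand(100, 3)
    m1, m2 = np.random.rand(50, 3), np.random.rand(50, 3)
    I, z = np.eye(3), np.zeros(3)
    fused = fuse_reconstructions(bino, m1, m2, I, z, I, z)
    print(fused.shape)  # (200, 3)
```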