Abstract

Omnidirectional three-dimensional (3D) measurement based on binocular omnidirectional vision systems is attracting increasing attention in computer vision. Such a system captures a 360° horizontal field of view (FOV) and a 180° vertical FOV from only a pair of images. However, the relationship between points on the image planes and points in 3D space is nonlinear, so traditional calibration methods are no longer applicable. This study proposes a hybrid calibration approach that fuses the unified camera model with a backpropagation neural network using a virtual 3D target; the neural network, improved with a genetic algorithm, compensates for various distortions and errors. First, the unified camera model is used to calibrate the intrinsic parameters, extrinsic parameters, and a transformation matrix. Second, pixel coordinates on the image planes are converted to 3D coordinates by binocular 3D reconstruction, and the errors between the actual and calculated 3D coordinates are obtained. Third, the genetic-algorithm-improved neural network is established and trained on the pixel coordinates and the corresponding errors. Experimental results indicate that the average measurement errors in the X and Y directions are reduced to 0.1057 and 0.1548 mm in the central area and to 0.1992 and 0.2748 mm in the edge area, respectively. Compared with mathematical and neural network methods, the hybrid calibration method provides more stable, robust, and accurate results and proves suitable for complex calibration scenes with a large FOV, high distortion, and strong nonlinearity.
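The error-compensation step can be pictured as a residual-learning problem: the calibrated binocular model reconstructs 3D points, and a small network learns the mapping from pixel coordinates to the remaining reconstruction error. The sketch below is an illustration only, not the authors' implementation; the network architecture, the use of scikit-learn's MLPRegressor in place of a genetic-algorithm-tuned backpropagation network, and all function names are assumptions.

```python
# Minimal sketch of the residual-compensation stage (illustrative assumptions,
# not the paper's code): train a small regressor to map stereo pixel
# coordinates to the error between reconstructed and known 3D coordinates.
import numpy as np
from sklearn.neural_network import MLPRegressor


def train_compensation_net(pixels_lr, recon_xyz, true_xyz):
    """Fit a correction model on calibration data.

    pixels_lr : (N, 4) array of (u_L, v_L, u_R, v_R) pixel coordinates
    recon_xyz : (N, 3) points reconstructed by the calibrated binocular model
    true_xyz  : (N, 3) known coordinates of the (virtual) 3D calibration target
    """
    residuals = true_xyz - recon_xyz  # errors the network should predict
    net = MLPRegressor(hidden_layer_sizes=(32, 32),
                       max_iter=5000, random_state=0)
    net.fit(pixels_lr, residuals)
    return net


def corrected_point(net, pixel_lr, recon_point):
    """Add the predicted compensation to a newly reconstructed point."""
    return recon_point + net.predict(np.asarray(pixel_lr).reshape(1, -1))[0]
```

In this picture, the unified camera model supplies the coarse reconstruction and the learned correction absorbs residual distortion, which mirrors the hybrid structure described in the abstract.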
