Abstract

Camera calibration is a crucial step for computer vision in many applications. For example, adequate calibration is required for infrared thermography inside gas turbines, where blade temperature measurements require associating each pixel with the corresponding point on the blade 3D model. The blade itself has to serve as the calibration frame, but it is only ever partially visible, so few control points are available. We propose and test a method that exploits the anisotropic uncertainty of the control points and improves calibration when the number of control points is limited. Assuming a bivariate Gaussian distribution of the position error of each control point, we define an uncertainty ellipse (with specific axis lengths and orientation) within which each control point is expected to lie. These ellipses define a weight matrix used in a weighted Direct Linear Transformation (wDLT). We present the mathematical formalism of this modified calibration algorithm and apply it to calibrate a camera from a picture of a well-known object in different situations, comparing its performance to the standard DLT method and showing that the wDLT algorithm provides a more robust and precise solution. We finally quantify the improvement by varying the magnitude of the random deviations in the control points' positions and under partial occlusion of the object.
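The abstract describes turning each control point's uncertainty ellipse into a weight matrix for a weighted DLT. The following is a minimal sketch of one way such a weighted DLT could be implemented (not necessarily the authors' exact formulation): each ellipse is expressed as a 2×2 covariance matrix, and its inverse is used to whiten the two DLT rows of that point before the standard SVD solution. The function name and the whitening choice are assumptions for illustration.

```python
import numpy as np

def weighted_dlt(world_pts, image_pts, covariances):
    """Estimate a 3x4 projection matrix from >= 6 correspondences.

    world_pts:   (n, 3) 3D control points.
    image_pts:   (n, 2) corresponding 2D pixel positions.
    covariances: (n, 2, 2) per-point covariance of the 2D position error
                 (the uncertainty ellipses); identity matrices recover plain DLT.
    """
    rows = []
    for (X, Y, Z), (u, v), cov in zip(world_pts, image_pts, covariances):
        # The two DLT rows generated by this correspondence.
        A_i = np.array([
            [X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u],
            [0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v],
        ])
        # Whiten with the inverse covariance: directions in which the point
        # is located more precisely receive a larger weight.
        W = np.linalg.inv(cov)          # 2x2 weight matrix
        L = np.linalg.cholesky(W)       # W = L @ L.T
        rows.append(L.T @ A_i)
    A = np.vstack(rows)
    # The right singular vector of the smallest singular value minimizes ||A p||.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```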

Highlights

  • Many computer vision applications, such as robotics, photogrammetry, or augmented reality, require camera calibration algorithms

  • The first test evaluated the robustness of the three algorithms (DLT, Bouguet’s method (BOU), and weighted Direct Linear Transformation (wDLT)): the reprojection error was computed with control points perturbed by random errors and, for wDLT, with different values of uncertainty (see the sketch after this list)

  • BOU calibration was performed without distortion estimation because not all random configurations of control points led to a solution; the estimation problem became ill-conditioned when the data did not contain enough information

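As a rough illustration of the robustness test summarized above, the sketch below perturbs the 2D control points with Gaussian noise and computes the mean reprojection error of an estimated projection matrix. The helper names and the isotropic noise model are assumptions, and the commented loop presumes an estimator such as the `weighted_dlt` sketch given earlier.

```python
import numpy as np

def reprojection_error(P, world_pts, image_pts):
    """Mean Euclidean distance between measured and reprojected points."""
    n = len(world_pts)
    X_h = np.hstack([world_pts, np.ones((n, 1))])   # homogeneous 3D points
    proj = (P @ X_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]               # perspective division
    return np.mean(np.linalg.norm(proj - image_pts, axis=1))

def perturb(image_pts, sigma, rng):
    """Add zero-mean Gaussian noise with std `sigma` (pixels) to the 2D points."""
    return image_pts + rng.normal(0.0, sigma, size=image_pts.shape)

# Assumed robustness loop: repeat the estimation on many perturbed copies of
# the same control points and collect the reprojection errors.
# rng = np.random.default_rng(0)
# errs = [reprojection_error(weighted_dlt(Xw, perturb(xi, 2.0, rng), covs), Xw, xi)
#         for _ in range(100)]
```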


Introduction

Many computer vision applications, such as robotics, photogrammetry, or augmented reality, require camera calibration algorithms. Calibration is the estimation of the parameters of the camera model from photos or videos acquired with the camera. Camera parameters are both extrinsic and intrinsic: extrinsic parameters depend on the camera pose, while intrinsic ones describe properties of the camera itself. A camera model is a mathematical description of the projection of a 3D point in the real world onto the 2D image plane. The parameters are usually estimated from control points, i.e., points whose coordinates are known both in the 3D real world and in the 2D image plane. A chessboard pattern is commonly used because its corners are easy to identify and its geometry is simple.
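For context on the standard chessboard workflow mentioned above (which is not applicable to the turbine-blade setting of this paper, where the blade itself must serve as the calibration frame), here is a minimal OpenCV-based sketch: chessboard corners are detected in each view and passed to `cv2.calibrateCamera`. The pattern size, square size, and file names are illustrative assumptions.

```python
import cv2
import numpy as np

# Inner-corner grid of the chessboard (assumed 9x6) and its 3D layout:
# corners lie on the Z=0 plane at integer multiples of the square size.
pattern = (9, 6)
square_size = 25.0  # mm, assumed
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in ["view1.png", "view2.png"]:          # hypothetical image files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics (camera matrix K, distortion coefficients) plus one pose per view.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```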

