Camera Calibration Through Geometric Constraints from Rotation and Projection Matrices
Camera calibration involves estimating the intrinsic and extrinsic parameters, which are essential for accurately performing tasks such as 3D reconstruction, object tracking, and augmented reality. In this work, we propose a novel constraint-based loss for estimating the intrinsic (focal length (f_x, f_y) and principal point (p_x, p_y)) and extrinsic (baseline b, disparity d, translation (t_x, t_y, t_z), and rotation, specifically pitch θ_p) camera parameters. Our novel constraints are based on geometric properties inherent in the camera model, including the anatomy of the projection matrix (vanishing points, image of the world origin, axis planes) and the orthonormality of the rotation matrix. We formulate these constraints as a novel Unsupervised Geometric Constraint Loss (UGCL) within a multi-task learning framework. Our methodology is a hybrid approach that combines the learning power of a neural network with the mathematical properties inherent in the camera projection matrix to estimate the desired parameters. This distinctive approach not only enhances the interpretability of the model but also facilitates a more informed learning process. Additionally, we introduce a new CVGL Camera Calibration dataset featuring over 900 configurations of camera parameters and 63,600 image pairs that closely mirror real-world conditions. By training and testing on both synthetic and real-world datasets, our proposed approach demonstrates improvements across all parameters when compared to state-of-the-art (SOTA) benchmarks. The code and the updated dataset can be found here: https://github.com/CVLABLUMS/CVGL-Camera-Calibration.
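One of the geometric constraints the abstract names is the orthonormality of the rotation matrix: a valid rotation R satisfies R Rᵀ = I and det(R) = 1, so any deviation can serve as an unsupervised penalty on a network's predicted rotation. The sketch below illustrates this single term only; the paper's actual UGCL combines several such terms (vanishing points, image of the world origin, axis planes), and the function name here is hypothetical.

```python
import numpy as np

def rotation_orthonormality_loss(R):
    """Unsupervised penalty on a predicted 3x3 rotation matrix.

    Zero exactly when R @ R.T == I and det(R) == 1 (a proper rotation).
    Illustrative sketch only -- not the paper's full UGCL.
    """
    ortho = np.sum((R @ R.T - np.eye(3)) ** 2)   # Frobenius penalty on R R^T - I
    det = (np.linalg.det(R) - 1.0) ** 2          # rules out reflections (det = -1)
    return ortho + det

# A true rotation incurs (near-)zero loss; a perturbed matrix does not.
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
assert rotation_orthonormality_loss(Rz) < 1e-12
assert rotation_orthonormality_loss(Rz + 0.1) > 1e-3
```

Because the penalty is differentiable in the entries of R, the same expression can be used directly as a training loss in an autodiff framework.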
- Research Article
112
- 10.1109/tpami.2005.40
- Feb 1, 2005
- IEEE Transactions on Pattern Analysis and Machine Intelligence
This paper concerns the incorporation of geometric information in camera calibration and 3D modeling. Using geometric constraints enables more stable results and allows us to perform tasks with fewer images. Our approach is motivated and developed within a framework of semi-automatic 3D modeling, where the user defines geometric primitives and constraints between them. It is based on the observation that constraints, such as coplanarity, parallelism, or orthogonality, are often embedded intuitively in parallelepipeds. Moreover, parallelepipeds are easy to delineate by a user and are well adapted to model the main structure of, e.g., architectural scenes. In this paper, first a duality that exists between the shape parameters of a parallelepiped and the intrinsic parameters of a camera is described. Then, a factorization-based algorithm exploiting this relation is developed. Using images of parallelepipeds, it allows us to simultaneously calibrate cameras, recover shapes of parallelepipeds, and estimate the relative pose of all entities. Besides geometric constraints expressed via parallelepipeds, our approach simultaneously takes into account the usual self-calibration constraints on cameras. The proposed algorithm is completed by a study of the singular cases of the calibration method. A complete method for the reconstruction of scene primitives that are not modeled by parallelepipeds is also briefly described. The proposed methods are validated by various experiments with real and simulated data, for single-view as well as multiview cases.
- Research Article
- 10.3745/kipstb.2003.10b.5.515
- Aug 1, 2003
- The KIPS Transactions:PartB
This paper proposes a calibration method that extracts the camera's intrinsic parameters from two images. Camera calibration is indispensable for obtaining 3D information from 2D images. Previous approaches either required three images containing a check pattern or solved the Kruppa equations from three consecutive images; instead, we exploit the geometric constraints of parallelism and orthogonality that are easily found in man-made scenes to extract the intrinsic parameters more quickly and simply. The intrinsic parameters are estimated from vanishing points, and the extrinsic parameters, consisting of the camera's rotation matrix and translation vector, are estimated from corresponding points in the two views. From the calibrated parameters, the projection matrix for each viewpoint is recovered. These projection matrices are used to recover 3D information of the scene, realize 3D reconstruction, and visualize new viewpoints.
- Conference Article
3
- 10.1061/9780784413029.089
- Jun 24, 2013
- Computing in Civil Engineering
The accuracy of the results in stereo image-based 3D reconstruction is very sensitive to the intrinsic and extrinsic camera parameters determined during camera calibration. Existing camera calibration algorithms induce a significant amount of error due to poor estimation accuracy of the camera parameters when used for long-range scenarios such as mapping civil infrastructure. This leads to unusable results and may cause the whole reconstruction process to fail. This paper proposes a novel way to address this problem. Instead of the incremental accuracy improvements typically brought by new calibration algorithms, the authors hypothesize that a set of multiple calibrations, created by videotaping a moving calibration pattern along a specific path, can increase overall calibration accuracy. This is achieved by using conventional camera calibration algorithms to perform separate estimations for some predefined distance values. The result, a set of camera parameters for different distances, is then input into the Structure from Motion process to improve the Euclidean accuracy of the reconstruction. The proposed method has been tested on infrastructure scenes, and the experimental analyses indicate improved performance.
- Conference Article
17
- 10.1109/robot.2002.1013612
- Aug 7, 2002
In this paper, we propose a method for precise camera calibration that conducts feature point extraction and camera parameter estimation iteratively. Many conventional studies of camera calibration have focused on how to calculate camera parameters from data obtained from input images, that is, the locations of feature points. However, these input images suffer from distortions caused by perspective and lens imperfections. In our proposed method, at the beginning of the procedure, the projective transformation matrices between the image planes and a calibration target, together with the lens distortion parameters, are approximately estimated. These parameters are used to reduce the influence of distortions in the input images. After the removal of distortions, feature points in the processed images are localized precisely and used to update the projective transformation matrices, the lens distortion parameters, and the intrinsic camera parameters. These steps are iterated until convergence, which results in precise estimation of the parameters. The effectiveness of the proposed method has been verified through experiments using synthesized data and real images.
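The distortion-removal step this abstract alternates with parameter estimation can be illustrated with the simplest case, a one-parameter radial model. Inverting the model has no closed form, so a fixed-point iteration is a common choice; the sketch below is a generic stand-in under that assumption, not the paper's actual procedure.

```python
import numpy as np

def distort(p, k1):
    """Apply one-parameter radial distortion to a normalized image point."""
    r2 = p[0] ** 2 + p[1] ** 2
    return p * (1.0 + k1 * r2)

def undistort(q, k1, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly divide
    the observed point by the distortion factor evaluated at the current
    estimate. Converges for moderate distortion."""
    p = q.copy()
    for _ in range(iters):
        r2 = p[0] ** 2 + p[1] ** 2
        p = q / (1.0 + k1 * r2)
    return p

q = distort(np.array([0.4, -0.3]), k1=-0.2)   # barrel-distorted observation
p = undistort(q, k1=-0.2)
assert np.allclose(p, [0.4, -0.3], atol=1e-8)
```

In the iterative scheme the abstract describes, feature points would be re-extracted from images corrected this way, and the distortion and intrinsic parameters re-estimated, until the loop converges.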
- Research Article
4
- 10.1364/oe.480086
- Jan 3, 2023
- Optics Express
In computer vision, camera calibration is essential for photogrammetric measurement. We propose a new stratified camera calibration method based on geometric constraints. This paper proposes several new theorems in 2D projective transformation: (1) there exists a family of lines whose parallelism remains invariant under a 2D projective transformation; these lines are parallel to the image of the line at infinity. (2) There is only one line whose perpendicularity to this family of parallel lines is invariant under a 2D projective transformation, and the principal point lies on this line. From the image of the line at infinity and the dual conic of the circular points, a closed-form solution for the line passing through the principal point is deduced. The angle between the target board and the image plane, which influences camera calibration, is computed. We propose a new geometric interpretation of the target board's pose and a corresponding solution method. To obtain appropriate poses of the target board for camera calibration, we propose a visual pose guide (VPG) system that guides a user to move the target board and capture appropriate images for calibration. The expected homography is defined, and its solution method is deduced. Experimental results with synthetic and real data verify the correctness and validity of the proposed method.
- Conference Article
2
- 10.1109/icip.2015.7351734
- Sep 1, 2015
Non-metric camera calibration is the art of modeling the geometry of the image formation process using only qualitative constraints rather than metric knowledge of the scene. In this paper we present a new color-coded calibration pattern, specifically designed for non-metric calibration. It embeds two bundles of orthogonal lines in two different color channels, granting two major advantages: each image containing such a pattern naturally carries all the geometric constraints needed for the calibration process, while the complexity of pattern detection is reduced by decoupling the recovery of the two line bundles into two independent processing phases. Furthermore, we present an analysis of the displacement between corresponding lines detected in the color channels and in the grayscale image, and propose a simple technique, based on a revised version of the Inverse Compositional Algorithm (ICA), as a solution to this issue. The performance of the proposed pattern was evaluated on datasets captured with a wide-angle lens camera. The results suggest that the use of a color-coded calibration pattern not only significantly reduces the amount of required user interaction, but also helps to achieve high-accuracy camera calibration, providing an important contribution to the integration of a fully automatic camera calibration tool.
- Research Article
9
- 10.1080/21642583.2023.2233562
- Jul 15, 2023
- Systems Science & Control Engineering
Camera calibration directly affects the accuracy and stability of the whole measurement system. Based on the characteristics of a circular-array calibration plate, a camera calibration method using such a plate is proposed in this paper. First, a subpixel edge detection algorithm is used for image preprocessing. Then, according to cross-ratio invariance and geometric constraints, the projected position of each circle's center point is obtained. Finally, a calibration experiment was carried out. Experimental results show that under arbitrary illumination conditions, the average reprojection error of the center coordinates obtained by the improved calibration algorithm is less than 0.12 pixels, which is better than traditional camera calibration algorithms.
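The cross-ratio invariance this abstract relies on is the classical fact that the cross ratio of four collinear points is unchanged by any projective transformation, which is what lets the true circle-center projections be located from distorted images of the plate. A minimal numerical check of the invariance (not the paper's center-finding algorithm) looks like this:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given by scalar coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography_1d(x, h):
    """Apply a 1D projective map x -> (h00*x + h01) / (h10*x + h11)."""
    return (h[0, 0] * x + h[0, 1]) / (h[1, 0] * x + h[1, 1])

# An arbitrary (invertible) 1D projective transformation.
h = np.array([[2.0, 1.0],
              [0.5, 3.0]])
pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*(homography_1d(x, h) for x in pts))
assert np.isclose(before, after)   # the cross ratio survives the projection
```

Because the invariant holds along any line through the plate, known point configurations on the calibration target can be matched to their images without first undoing the perspective distortion.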
- Conference Article
- 10.1109/icassp43922.2022.9746917
- May 23, 2022
Camera calibration is a necessary prerequisite in many applications of robotics, especially in robot vision, in order to obtain a metric reconstruction from a 2D image. In this paper, we address the problem of calibrating from a single image of a surface of revolution (SOR) using deep learning, in order to determine the camera's intrinsic parameters. Geometric constraints based on the symmetry properties of the SOR structure are incorporated into our proposed learning-based camera calibration framework. To enable calibration from a single view, we also propose a learning-based conic detection model that fits the geometric primitive of a cylinder. Calibration from a single view is completed by minimizing the geometric constraints of two conics detected by the learning-based model with cylinder images as input. Objects with a surface of revolution, such as cans, bottles, and bowls, are commonly visible in daily life, making this research both significant and practical. Finally, traditional calibration techniques are compared against our single-image calibration. Experiments conducted on a newly generated dataset demonstrate the effectiveness and robustness of the proposed method.
- Research Article
2
- 10.1016/s0031-3203(01)00176-5
- Jun 19, 2002
- Pattern Recognition
Euclidean reconstruction from contour matches
- Book Chapter
1
- 10.1007/3-540-45411-x_37
- Jan 1, 2001
In this paper we consider the problem of reconstructing architectural scenes from multiple photographs taken from arbitrary viewpoints. The original contribution of this work is the use of a map as a source of a priori knowledge and geometric constraints, in order to obtain a detailed model of a scene in a fast and simple way. We assume the images are uncalibrated and contain at least one planar structure, such as a facade, and exploit the planar homography induced between the world plane and the image to compute a first estimate of the projection matrix. The estimates are improved by using correspondences between the images and the map. We show how these simple constraints can be used to calibrate the cameras, recover the projection matrices for each viewpoint, and obtain 3D models by triangulation.
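The world-plane-to-image homography this abstract exploits is conventionally estimated with the direct linear transform (DLT) from four or more point correspondences. The sketch below shows that standard estimation step only, as one plausible realization of it; the paper's full pipeline additionally uses map correspondences to refine the projection matrices.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 correspondences).

    Direct linear transform: each pair contributes two linear rows in the
    9 unknown entries of H; the solution is the right singular vector for
    the smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale

# Recover a known homography from four exact correspondences.
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.2, 0.9, 3.0],
                   [1e-3, 2e-3, 1.0]])
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))   # dehomogenize the image point
H = homography_dlt(src, dst)
assert np.allclose(H, H_true, atol=1e-6)
```

With real detections the correspondences are noisy, so more than four points and a normalization of the coordinates before the SVD are normally used.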
- Research Article
33
- 10.1016/s0031-3203(02)00288-1
- Jan 15, 2003
- Pattern Recognition
Uncalibrated reconstruction: an adaptation to structured light vision
- Research Article
- 10.21307/ijanmc-2019-036
- Jan 1, 2019
- International Journal of Advanced Network, Monitoring and Controls
The line-structured-light 3D reconstruction system is a non-contact 3D measurement system with the advantages of high precision, high speed, little damage to objects, and strong adaptability. Camera calibration is a major factor constraining the accuracy of 3D measurement systems. Camera calibration is based on the pinhole imaging model and obtains, through a series of calculations, the camera's internal parameters (focal length, distortion coefficients) and external parameters (rotation matrix and translation vector). Different calibration methods use different calibration targets, which can be divided into 3D, 2D, and one-dimensional calibration targets according to their characteristics. This paper mainly discusses the content and significance of calibration, calibration methods for different targets, and evaluation methods for calibration with different targets. First, the content and significance of calibration are expounded. Then, the calibration algorithms for different targets are analyzed. Finally, the calibration algorithms are summarized, and the development trends, advantages, and disadvantages of the different calibration methods are pointed out.
- Research Article
722
- 10.1007/bf00127813
- Mar 1, 1990
- International Journal of Computer Vision
In this article a new method for the calibration of a vision system consisting of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector describing the rigid motion between the coordinate systems fixed in the two cameras, are estimated from a stereo image pair of a suitable planar pattern. First, the rotation matrix is computed by matching the corresponding vanishing points in the two images; then the translation vector is estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision achievable with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, such as depth from stereo and motion from image sequences.
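The cube image gives vanishing points of mutually orthogonal directions, and the classical relation behind this family of methods is that, for square pixels and a known principal point pp, two such vanishing points satisfy (v1 - pp)·(v2 - pp) + f² = 0. A sketch of that relation under those simplifying assumptions (not the article's exact two-step algorithm, which also recovers the principal point):

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, pp):
    """Focal length from the vanishing points of two orthogonal directions,
    assuming square pixels, zero skew, and a known principal point pp."""
    d = -np.dot(np.asarray(v1) - pp, np.asarray(v2) - pp)
    if d <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(d)

# Synthetic check: image the x- and z-axis directions of a rotated camera.
f_true, pp = 800.0, np.array([320.0, 240.0])
K = np.array([[f_true, 0.0, pp[0]],
              [0.0, f_true, pp[1]],
              [0.0, 0.0, 1.0]])
theta = 0.4
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
vps = []
for axis in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])):
    v = K @ R @ axis                 # image of the point at infinity along `axis`
    vps.append(v[:2] / v[2])
assert np.isclose(focal_from_vanishing_points(vps[0], vps[1], pp), f_true)
```

The derivation is short: back-projected ray directions are K⁻¹v1 and K⁻¹v2, and setting their dot product to zero yields the stated constraint.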
- Conference Article
40
- 10.1117/12.333798
- Dec 14, 1998
- Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE
Man-made objects often have a polyhedral shape. For polyhedral objects it is advantageous to use a line-photogrammetric approach, i.e. lines are observed in the images instead of points. A novel line-photogrammetric mathematical model is presented. This model is built from condition equations with image line observations and object parameters in the form of the coordinates of object points and the parameters of object planes. The use of plane parameters significantly simplifies the formulation of geometric constraints. Object line parameters are not included in the model. The duality of the point and plane representation in space is exploited and leads to linear equations for the computation of approximate values. Constraints on the parameters are used to eliminate the rank deficiency and to enforce geometric object constraints. The exterior orientation of the images is assumed to be approximately known. The rotation matrix is parameterized by a unit quaternion. The main advantages of the presented mathematical model are the use of image lines as observations and the way in which it facilitates the incorporation of all types of geometric object constraints. Furthermore, the model is free of singularities through a combination of over-parametrization and constraints. The least squares adjustment allows rigorous assessment of the precision of the computed parameters and allows for statistical testing to detect possible errors in the observations and the constraints. Examples demonstrate the advantages of the proposed mathematical model and show the effects of the introduction of geometric constraints.
- Research Article
176
- 10.1109/tpami.2005.80
- Apr 1, 2005
- IEEE Transactions on Pattern Analysis and Machine Intelligence
We investigate the projective properties of a feature consisting of two concentric circles. We demonstrate that there exist geometric and algebraic constraints on its projection. We show how these constraints greatly simplify the recovery of the affine and Euclidean structures of a 3D plane. As an application, we assess the performance of two camera calibration algorithms.