Abstract
Camera calibration is a crucial prerequisite for the retrieval of metric information from images. The problem of camera calibration is the computation of the camera's intrinsic parameters (i.e., coefficients of geometric distortion, principal distance, and principal point) and extrinsic parameters (i.e., the 3D spatial orientations ω, ϕ, κ and the 3D spatial translations tx, ty, tz). The intrinsic camera calibration (i.e., interior orientation) models the imaging system of the camera optics, while the extrinsic camera calibration (i.e., exterior orientation) describes the translation and orientation of the camera with respect to the global coordinate system. Traditional camera calibration techniques require a predefined mathematical camera model and prior knowledge of many parameters. Defining a realistic camera model is quite difficult, and the computation of the camera calibration parameters is error-prone. In this paper, a novel implicit camera calibration method based on Radial Basis Function Neural Networks is proposed. The proposed method requires neither an exactly defined camera model nor any prior knowledge about the imaging setup or the classical camera calibration parameters. The proposed method uses a calibration grid pattern rotated around a fixed axis. The rotations of the calibration grid pattern have been acquired using an Xsens MTi-9 inertial sensor. In order to evaluate the success of the proposed method, its 3D reconstruction performance has been compared with that of a traditional camera calibration method, the Modified Direct Linear Transformation (MDLT). Extensive simulation results show that the proposed method achieves better performance than MDLT in terms of 3D reconstruction.
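For context, a minimal sketch of the kind of explicit pinhole model that traditional techniques such as MDLT parameterize is given below; it combines the intrinsic parameters (principal distance c, principal point (x0, y0), and a first-order radial distortion coefficient k1) with the extrinsic rotation (ω, ϕ, κ) and translation (tx, ty, tz). The formulation, rotation convention, and all symbol names are illustrative assumptions, not the exact model used in the paper.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from world to camera frame as R = Rz(kappa) @ Ry(phi) @ Rx(omega)."""
    cx, sx = np.cos(omega), np.sin(omega)
    cy, sy = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(X_world, omega, phi, kappa, t, c, x0, y0, k1):
    """Project a 3D world point to distorted image coordinates.

    c        : principal distance (focal length in pixel units)
    (x0, y0) : principal point
    k1       : first-order radial distortion coefficient
    t        : translation (tx, ty, tz) of the world origin in the camera frame
    """
    R = rotation_matrix(omega, phi, kappa)
    Xc = R @ np.asarray(X_world, dtype=float) + np.asarray(t, dtype=float)
    # Perspective division onto the normalized image plane.
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]
    # First-order radial distortion.
    r2 = x * x + y * y
    x_d, y_d = x * (1 + k1 * r2), y * (1 + k1 * r2)
    # Apply the intrinsic parameters.
    return c * x_d + x0, c * y_d + y0

# Example: project a point 5 units in front of an unrotated, untranslated camera.
u, v = project([0.1, -0.2, 5.0], 0.0, 0.0, 0.0, [0, 0, 0],
               c=800.0, x0=320.0, y0=240.0, k1=-1e-3)
```

Traditional (explicit) calibration estimates all of these parameters from point correspondences, whereas the implicit method proposed in the paper bypasses the model entirely by learning the image-to-world mapping directly.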
Highlights
Camera calibration is a major issue in computer vision since it is related to many vision problems such as neurovision, remote sensing, photogrammetry, visual odometry, medical imaging, and shape from motion/silhouette/shading/stereo
The obtained results have been compared with the results obtained from a traditional camera calibration method, Modified Direct Linear Transformation (MDLT)
The main advantages of the proposed method are as follows: it does not require knowledge of complex mathematical models of view geometry or an initial estimate of the camera calibration, it can be used with various cameras while producing correct outputs, and it can be used in dynamic systems to recognize the position of the camera once the artificial neural network (ANN) structure has been trained
Summary
Camera calibration is a major issue in computer vision since it is related to many vision problems such as neurovision, remote sensing, photogrammetry, visual odometry, medical imaging, and shape from motion/silhouette/shading/stereo. The Modified Direct Linear Transformation (MDLT) is one of the commonly used camera calibration methods in computational vision applications for 2D and 3D object reconstruction [24]. ANNs have been used to solve some of the complex problems in the fields of multicamera calibration, modeling of geometric distortions of image sensors, stereo vision, image denoising, image enhancement, and image restoration. Here, ANNs are applied to the nonlinear problem of multicamera calibration for 3D information extraction from images. A Radial Basis Function based Artificial Neural Network (RBF) [26] is used to calibrate a multicamera system. In order to use an RBF, the training functions of the hidden and output layers, the number of neurons in those layers, and a performance measure for assessing the quality of the learning phase must be specified. The main advantages of Differential Evolution (DE) can be summarized as easy implementation, fast convergence, a limited number of control parameters, and the ability to find the global minimum regardless of the quality of the initial parameter values.
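To make the implicit approach concrete, the sketch below fits a Gaussian RBF network that maps stereo image coordinates of calibration-grid corners directly to their 3D world coordinates, which is the kind of input-output mapping described above. The layer size, kernel width, random choice of centers, and least-squares training of the output layer are assumptions made for this sketch; the paper's actual network configuration and training (e.g., tuning with DE) differ.

```python
import numpy as np

class RBFNet:
    """Gaussian RBF network with a linear output layer trained by least squares.

    Assumed setup: inputs are (u_left, v_left, u_right, v_right) image
    coordinates of a grid corner, targets are its (X, Y, Z) world coordinates.
    """

    def __init__(self, n_centers=50, sigma=1.0, seed=0):
        self.n_centers = n_centers
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Gaussian activation of every sample against every center.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, Y):
        X = np.asarray(X, dtype=float)
        Y = np.asarray(Y, dtype=float)
        # Pick centers as a random subset of the training inputs.
        idx = self.rng.choice(len(X), size=min(self.n_centers, len(X)), replace=False)
        self.centers = X[idx]
        H = self._hidden(X)
        # Solve the linear output-layer weights W in the least-squares sense.
        self.W, *_ = np.linalg.lstsq(H, Y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(np.asarray(X, dtype=float)) @ self.W

# Usage sketch: image_pts is an (N, 4) array of stereo pixel coordinates of grid
# corners, world_pts an (N, 3) array of their known 3D coordinates on the
# rotated calibration grid.
#   net = RBFNet(n_centers=60, sigma=50.0).fit(image_pts, world_pts)
#   xyz = net.predict(new_image_pts)
```

Once trained, such a network plays the role of the explicit camera model: new image measurements are mapped to 3D coordinates without ever recovering the intrinsic or extrinsic parameters themselves.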