Abstract

Camera calibration and 3D reconstruction are two crucial steps in computer vision. With the progress of robotic and autonomous rover systems, zooming cameras are widely applied, making online calibration and 3D reconstruction increasingly important. This paper proposes a minimal vision system consisting of a translation platform and an un-calibrated camera mounted on it. With this minimal system, we can linearly calibrate the camera online and reconstruct 3D structure from three un-calibrated images by exploiting translational motions. The two images generated by translating the camera allow the recovery of scene depths. The depths are then analyzed with error models and used to determine the infinite homography between the third image and either of the two translated images. The intrinsic parameters are then calibrated linearly from the computed infinite homography, after which camera motion estimation and 3D reconstruction follow readily. We also propose a two-step optimization method that refines both the calibration and the 3D reconstruction by minimizing the overall back-projection error across the three images within a small-scale bundle adjustment framework. The proposed method has been validated on both simulated and real image data. The results demonstrate that the proposed minimal linear system solves online camera self-calibration and the reconstruction of 3D structure. In summary, the paper presents a minimal linear framework that uses an un-calibrated camera and a low-cost translation platform to address linear camera self-calibration, motion estimation, 3D reconstruction, and optimization in a practical, simple, and accurate way.
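The linear calibration step mentioned above can be illustrated with a minimal numpy sketch; this is an assumption-laden illustration of the standard infinite-homography constraint, not the paper's implementation. For an infinite homography H = K R K^{-1} between two views related by a rotation R, the image of the absolute conic w = (K K^T)^{-1} satisfies H^T w H = w, which is linear in the six parameters of the symmetric matrix w; K is then recovered by a triangular factorization of w^{-1}. The function names (`iac_constraints`, `calibrate`) are hypothetical.

```python
import numpy as np

def iac_constraints(H):
    """Linear constraints on the 6 parameters of the symmetric matrix w
    (the image of the absolute conic) implied by H^T w H = w, where H is
    an infinite homography H = K R K^{-1}."""
    H = H / np.cbrt(np.linalg.det(H))           # scale so that det(H) = 1
    idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    rows = []
    for a in range(3):
        for b in range(a, 3):                   # one equation per entry (a, b)
            row = np.zeros(6)
            for k, (i, j) in enumerate(idx):
                c = H[i, a] * H[j, b]
                if i != j:                      # symmetric off-diagonal term
                    c += H[j, a] * H[i, b]
                row[k] = c
                if (i, j) == (a, b):
                    row[k] -= 1.0               # subtract the w_{ab} on the right
            rows.append(row)
    return rows

def calibrate(H_list):
    """Recover the intrinsic matrix K linearly from a list of infinite
    homographies by solving for w = (K K^T)^{-1} in a least-squares sense."""
    A = np.vstack([r for H in H_list for r in iac_constraints(H)])
    _, _, Vt = np.linalg.svd(A)                 # null vector = stacked w entries
    w = np.zeros((3, 3))
    for k, (i, j) in enumerate([(0,0),(0,1),(0,2),(1,1),(1,2),(2,2)]):
        w[i, j] = w[j, i] = Vt[-1, k]
    if w[2, 2] < 0:                             # fix the arbitrary overall sign
        w = -w
    KKt = np.linalg.inv(w)
    # Upper-triangular K with K K^T = KKt, via a row/column-flipped Cholesky.
    J = np.eye(3)[::-1]
    K = J @ np.linalg.cholesky(J @ KKt @ J) @ J
    return K / K[2, 2]
```

On synthetic homographies built from a known K and a rotation, `calibrate` recovers K up to numerical error. Note that a single rotation leaves a one-parameter ambiguity in w (along the outer product of the rotation axis with itself), so without extra assumptions such as zero skew, constraints from at least two rotations about different axes must be stacked, as this sketch does.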
