Abstract

In this paper, we propose an online extrinsic correction method that effectively optimizes the extrinsic parameters of multi-camera systems used in visual SLAM. In typical multi-camera visual SLAM systems, the intrinsic and extrinsic parameters of the cameras are computed through offline calibration and then used as fixed constraints during online execution. However, the camera rig can be physically deformed by shock or vibration, and the resulting deviation from the offline calibration parameters can adversely affect the accuracy of triangulation and pose estimation. It is therefore crucial to maintain accurate calibration of the camera rig continuously throughout execution. Previous online calibration methods optimize the extrinsic camera parameters over their full degrees of freedom (DoF) by minimizing the reprojection error, but the limited visual information available online may bias the resulting camera poses. Based on the observation that the cameras are mounted on a physical body whose possible deformations are restricted rather than completely free, we propose to model the patterns of physical rig deformation under external forces in advance, and then use the pre-trained low-dimensional deformation model to estimate the changed camera poses robustly and accurately in real time. The proposed method consists of two steps. First, a physical model of the camera system is constructed in a simulator, its deformations under various external disturbances are recorded, and the deformation patterns are modeled with a PCA algorithm to build a low-dimensional model. During online execution, the camera poses are updated by minimizing the reprojection errors of visual features within this pre-trained low-dimensional parameterization, instead of optimizing all camera poses independently. Experiments in synthetic environments show that the proposed online extrinsic correction method produces more accurate and robust camera pose estimates than the existing method, even when inaccurate 3D-2D correspondences exist or 2D feature positions are noisy.
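The two-step pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the pinhole projection, the axis-angle pose perturbation, and names such as `fit_deformation_model` and `residuals` are assumptions introduced here. The offline step reduces simulated rig-deformation samples to a few PCA modes; the online step minimizes reprojection error over only those mode coefficients rather than over all camera poses independently.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# ---- Offline: learn a low-dimensional rig-deformation basis with PCA ----
def fit_deformation_model(samples, n_components=3):
    """samples: (N, 6*C) recorded deformations; each of C cameras contributes a
    6-DoF perturbation (3 axis-angle rotation + 3 translation) per sample."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:n_components]          # rows of the basis are deformation modes

# ---- Helpers (hypothetical pinhole camera model, for illustration only) ----
def apply_delta(R, t, delta):
    """Perturb a world-to-camera pose (R, t) by a 6-vector [axis-angle, translation]."""
    dR = Rotation.from_rotvec(delta[:3]).as_matrix()
    return dR @ R, t + delta[3:]

def project(points, R, t, K):
    """Project Nx3 world points with extrinsics (R, t) and intrinsic matrix K."""
    cam = points @ R.T + t
    uv = cam[:, :2] / cam[:, 2:3]
    return uv @ K[:2, :2].T + K[:2, 2]

# ---- Online: optimize only the PCA coefficients, not full camera poses ----
def residuals(coeffs, mean, basis, rig, points_3d, obs_2d):
    """rig: list of (R, t, K) per camera; points_3d/obs_2d: per-camera arrays of
    3D points and their observed 2D features."""
    deformation = mean + coeffs @ basis      # expand k coefficients to 6*C corrections
    errs = []
    for i, (R, t, K) in enumerate(rig):
        Rd, td = apply_delta(R, t, deformation[6 * i: 6 * i + 6])
        errs.append((project(points_3d[i], Rd, td, K) - obs_2d[i]).ravel())
    return np.concatenate(errs)

# Usage sketch: only n_components parameters are estimated instead of 6*C pose DoF.
# mean, basis = fit_deformation_model(recorded_samples, n_components=3)
# sol = least_squares(residuals, np.zeros(3),
#                     args=(mean, basis, rig, points_3d, obs_2d))
```

Restricting the optimization to the learned deformation coefficients is what gives the method its robustness: camera poses can only move along deformation patterns that the physical rig was observed to exhibit, so noisy or incorrect 3D-2D correspondences cannot pull individual cameras into physically implausible configurations.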
