Abstract

In intelligent vehicles, extrinsic camera calibration should be performed on a regular basis to cope with unpredictable mechanical changes or variations in weight load distribution. In particular, high-precision extrinsic parameters between the camera coordinate system and the world coordinate system are essential for high-level functions in intelligent vehicles such as distance estimation and lane departure warning. However, conventional calibration methods, which solve a Perspective-n-Point (PnP) problem, require laborious work to measure the positions of 3D points in the world coordinate system. To reduce this inconvenience, this paper proposes an automatic camera calibration method based on 3D reconstruction. The main contribution of this paper is a novel reconstruction method that recovers 3D points on planes perpendicular to the ground. The proposed method jointly optimizes the reprojection errors of image features projected from multiple planar surfaces and thereby significantly reduces errors in the camera extrinsic parameters. Experiments were conducted in both synthetic simulation and real calibration environments to demonstrate the effectiveness of the proposed method.

Highlights

  • Recovering the positions of 3D points from 2D-2D correspondences is a fundamental building block in geometric computer vision

  • The proposed method is composed of constrained multiple planar reconstruction and automatic extrinsic camera calibration

  • We propose a method for automatic camera calibration of intelligent vehicles

Introduction

Recovering the positions of 3D points from 2D-2D correspondences is a fundamental building block in geometric computer vision. This process, called triangulation, is an essential procedure in many applications, including structure-from-motion (SfM) [1,2,3], simultaneous localization and mapping (SLAM) [4,5,6], and visual odometry [7,8]. Ideally, the back-projected rays from an image correspondence intersect at a single point in three-dimensional space, and the recovery can be formulated as a direct linear transformation (DLT). In practice, however, the rays do not necessarily intersect because of the measurement noise in image features, which in general do not satisfy the epipolar geometry [9]. Recovering 3D information is therefore not a trivial problem, even in the two-view case.
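As a minimal sketch of the DLT triangulation described above (not the paper's own implementation; the function name and test geometry are illustrative), each view contributes two linear constraints on the homogeneous 3D point, and the least-squares solution under noise is the smallest right singular vector of the stacked constraint matrix:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from a 2D-2D correspondence via DLT.

    P1, P2: 3x4 camera projection matrices (assumed known).
    x1, x2: 2D image points (u, v) observed in each view.
    Returns the 3D point in inhomogeneous coordinates.
    """
    # Each view gives two linear constraints on the homogeneous point X:
    #   u * (row 3 of P) - (row 1 of P)  and  v * (row 3 of P) - (row 2 of P).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # With noisy features the rays do not intersect exactly; the algebraic
    # least-squares solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: two cameras (identity intrinsics) observing (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # translated along x
X_true = np.array([1.0, 2.0, 10.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt(P1, P2, x1, x2))  # ≈ [1. 2. 10.]
```

With noise-free correspondences the recovered point matches the ground truth; with real image features, the residual of this linear system is exactly the kind of reprojection-related error the paper's joint optimization is designed to reduce.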

