Abstract

Camera calibration is a critical step in planning machine vision tasks such as 3D reconstruction, size measurement, and precise target positioning, because calibration accuracy directly determines the accuracy of the vision system. In most image distortion models, a single set of parameters is applied to every pixel in the image. However, this can produce relatively large pixel reprojection errors near the image edges, compromising calibration accuracy. In this paper, we present a camera calibration optimization algorithm that uses a step function to split the image into a center region and an edge region. First, based on the observation that pixel reprojection error grows with a pixel's distance from the image center, we propose a flexible method for dividing the image into these two regions, center and boundary. The algorithm then automatically determines the step position and rebuilds the calibration model so that distortion in the center and boundary regions is calibrated separately. With this optimization, the number of distortion parameters in the original model is doubled, and each parameter set describes the distortion of its own region. In this way, our method improves on traditional calibration models, which use a single global model to describe the distortion of the entire image, and achieves higher calibration accuracy. Experiments show that the method significantly improves pixel reprojection accuracy, particularly at image edges, and simulations show that it is more flexible than traditional methods.
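The abstract describes the region-split distortion model only at a high level. The sketch below is a minimal illustration of the idea, assuming a standard two-term radial distortion model and a single radius threshold acting as the step position; the function name, coefficient values, and threshold are hypothetical and are not taken from the paper.

```python
import numpy as np

def apply_piecewise_distortion(points_norm, k_center, k_edge, r_split):
    """Distort normalized image points with a region-dependent radial model.

    points_norm : (N, 2) array of normalized coordinates (origin at principal point)
    k_center    : (k1, k2) radial coefficients used where r <= r_split
    k_edge      : (k1, k2) radial coefficients used where r >  r_split
    r_split     : radius of the step separating the center and boundary regions
    """
    x, y = points_norm[:, 0], points_norm[:, 1]
    r2 = x**2 + y**2
    r = np.sqrt(r2)

    # Step function: select which coefficient set governs each point.
    use_edge = r > r_split
    k1 = np.where(use_edge, k_edge[0], k_center[0])
    k2 = np.where(use_edge, k_edge[1], k_center[1])

    # Two-term radial distortion, evaluated with the selected coefficients.
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return np.stack([x * scale, y * scale], axis=1)


# Example: the second point lies beyond r_split, so it is distorted
# by the edge-region coefficients (all values are illustrative).
pts = np.array([[0.05, 0.02], [0.45, 0.40]])
distorted = apply_piecewise_distortion(pts,
                                       k_center=(-0.10, 0.01),
                                       k_edge=(-0.18, 0.03),
                                       r_split=0.35)
print(distorted)
```

In a calibration pipeline, the two coefficient sets and the step position would be estimated jointly by minimizing reprojection error, rather than fixed as in this example.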
