Abstract

This paper presents a novel two-step camera calibration method for a GPS/INS/Stereo Camera multi-sensor kinematic positioning and navigation system. A camera auto-calibration is first performed to obtain the lens distortion parameters, the up-to-scale baseline length, and the relative orientation between the stereo cameras. Then, a system calibration is introduced to recover the camera lever-arms and bore-sight angles with respect to the IMU, as well as the absolute scale of the camera, using the GPS/INS solution. The auto-calibration algorithm employs the three-view scale-restraint equations (SRE). In contrast to the collinearity equations (COL), it is free of landmark parameters and ground control points (GCPs), which makes the proposed method computationally more efficient. Results comparing the SRE and COL methods are presented using both simulated and road-test data. They show that the proposed SRE method requires fewer computational resources and achieves the same or better accuracy than the traditional COL approach.
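For orientation, the sketch below gives the generic photogrammetric form of the three-view scale-restraint condition for a tie point observed in an image triplet; the symbols (unit ray directions u_1, u_2, u_3 expressed in a common frame, and baseline vectors b_12, b_23 between consecutive camera centres) are illustrative assumptions, and the authors' exact parameterization may differ.

\[
\frac{(\mathbf{b}_{12}\times\mathbf{u}_1)\cdot(\mathbf{u}_1\times\mathbf{u}_2)}{\lVert\mathbf{u}_1\times\mathbf{u}_2\rVert^{2}}
\;-\;
\frac{(\mathbf{b}_{23}\times\mathbf{u}_3)\cdot(\mathbf{u}_2\times\mathbf{u}_3)}{\lVert\mathbf{u}_2\times\mathbf{u}_3\rVert^{2}} \;=\; 0
\]

Each term is the triangulated range from the middle camera along u_2, computed from the first and second image pairs respectively; forcing the two ranges to agree constrains the relative scale of the two pairs without introducing the 3-D landmark coordinates as unknowns, which is why no landmark parameters or GCPs enter the adjustment.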

Highlights

  • The high demand for low-cost multi-sensor kinematic positioning and navigation systems as the core of the direct-georeferencing technique in mobile mapping is continuously driving more research and development activities

  • Results from the simulated data: simulations were conducted to compare the performance of the collinearity equations (COL) and scale-restraint equations (SRE) auto-calibration algorithms based on a typical land vehicle trajectory

  • Results from road test data: test results from both the camera auto-calibration and the system calibration using road test data are presented



Introduction

The high demand for low-cost multi-sensor kinematic positioning and navigation systems as the core of the direct-georeferencing technique in mobile mapping is continuously driving more research and development activities. The effective and efficient utilization of images is among the most recent subjects of scientific research and high-tech industry development. In this field, York University’s Earth Observation Laboratory (EOL) is engaged in the study of image-aided inertial integrated navigation as a natural continuation of its past research in multi-sensor integrated kinematic positioning and navigation (Qian et al., 2012; Wang et al., 2015). In an image-aided inertial navigation system (IA-INS), the errors of the inertial navigator are estimated with a Kalman filter using measurements derived from images. The process of estimating the camera parameters required for this integration is referred to as camera calibration.
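As an illustration of this idea only, and not of the authors' implementation, the following minimal Python sketch shows a standard Kalman filter measurement update in which the measurement vector is a residual derived from image observations; the error-state dimension, the design matrix H, and the noise levels are placeholder assumptions.

import numpy as np

def kf_measurement_update(x, P, z, H, R):
    # Standard Kalman filter measurement update.
    # x : (n,)  error-state estimate (e.g., position, velocity, attitude errors)
    # P : (n,n) error-state covariance
    # z : (m,)  image-derived measurement residual
    # H : (m,n) measurement (design) matrix
    # R : (m,m) measurement noise covariance
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_upd = x + K @ (z - H @ x)             # corrected error state
    I_KH = np.eye(P.shape[0]) - K @ H
    P_upd = I_KH @ P @ I_KH.T + K @ R @ K.T # Joseph form covariance update
    return x_upd, P_upd

# Hypothetical usage: a 15-element error state updated with one 2-D image residual.
rng = np.random.default_rng(0)
n, m = 15, 2
x, P = np.zeros(n), 0.1 * np.eye(n)
H = 0.01 * rng.standard_normal((m, n))      # placeholder design matrix
R = (5e-4) ** 2 * np.eye(m)                 # placeholder measurement noise
z = 1e-3 * rng.standard_normal(m)           # placeholder image residual
x, P = kf_measurement_update(x, P, z, H, R)

The Joseph form is used for the covariance update only because it is robust to rounding; nothing in the paper excerpt specifies this choice.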
