Abstract

This paper presents a comprehensive analysis of the system calibration between an optical camera and a range finder. The results suggest guidelines for accurate and efficient system calibration that enables high-quality data fusion. First, self-calibration procedures were carried out for both the optical camera and the range finder using a purpose-built testbed, and the interior orientation parameters of the sensors were precisely computed. Afterwards, 92 system calibration experiments were carried out with different approaches and data configurations. To compare the experimental results, two measures were considered: the matching rate of the fused data and the standard deviation of the relative orientation parameters derived from the system calibration procedures. Among the 92 experimental cases, the best result (a matching rate of 99.08%) was obtained with the one-step system calibration method and six datasets from multiple columns. The root mean square values of the residuals after the self- and system calibrations were less than 0.8 and 0.6 pixels, respectively. In the overall evaluation, the one-step system calibration method using four or more datasets was confirmed to provide more stable and accurate relative orientation parameters and data fusion results than the other cases.
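
As a minimal illustration of the two comparison measures, the sketch below (Python) shows how they might be computed; the function names are assumptions, and the paper's exact matching criterion is not reproduced here.

    import numpy as np

    def rop_stability(rop_estimates):
        """Standard deviation of each relative orientation parameter (ROP)
        across repeated calibration experiments. Rows are experiments;
        columns are the six ROPs (three rotations, three translations)."""
        return np.std(rop_estimates, axis=0, ddof=1)

    def matching_rate(num_matched_points, num_total_points):
        """Percentage of fused range points judged consistent with the
        image content (assumed definition of the matching rate)."""
        return 100.0 * num_matched_points / num_total_points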

Highlights

  • In recent years, 3D modeling has been applied in various areas including public safety, virtual environments, fire and police planning, location-based services, environmental monitoring, intelligent transportation, structural health monitoring, underground construction, motion capture, and so on [1]

  • The analysis of the system calibration and data fusion results was performed in three ways: (i) two calculation methods for the relative orientation parameters (ROPs); (ii) different geometrical locations of the datasets; (iii) different numbers of datasets

  • This study provided and compared various system calibration results comprehensively according to the calibration approach (i.e., one-step or two-step) and the characteristics of the used datasets (i.e., different geometrical locations and numbers of datasets). Prerequisite accurate self-calibration procedures were performed before system calibration. Afterwards, 46 combinations of …
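
As a rough sketch of the two-step approach named above, the Python fragment below composes the two independently estimated sensor poses into the ROPs; the matrix conventions (world-to-sensor rotation R and translation t) are assumptions. In the one-step approach, by contrast, the ROPs are estimated directly as unknowns in a single joint adjustment of all observations.

    import numpy as np

    def rops_two_step(R_cam, t_cam, R_rf, t_rf):
        """Two-step approach (sketch): the exterior orientation of each
        sensor is first estimated independently against the calibration
        testbed; the relative orientation of the range finder with
        respect to the camera is then derived by composing the poses."""
        R_rel = R_rf @ R_cam.T        # rotation: camera frame -> range-finder frame
        t_rel = t_rf - R_rel @ t_cam  # translation expressed in the range-finder frame
        return R_rel, t_rel

Because the composed result inherits the errors of both individual pose estimates, this also hints at why a one-step joint estimation can be more stable.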


Summary

Introduction

3D modeling has been applied in various areas including public safety, virtual environments, fire and police planning, location-based services, environmental monitoring, intelligent transportation, structural health monitoring, underground construction, motion capture, and so on [1]. The data for such models are commonly acquired with multi-modal systems: a multi-modal system generally consists of optical cameras providing color information and other sensors giving depth information, such as light detection and ranging (LiDAR), laser line scanners, and range finders [3]. When data are acquired in indoor environments, the disadvantages of an optical camera-range finder system are significantly reduced, since the distances to indoor objects and structures are sufficiently short and the influence of sunlight is minimized. In this context, this paper discusses a multi-modal system comprising an optical camera and a range finder for use in indoor environments.
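
To make the intended data fusion concrete, the following minimal sketch (Python) projects a single range-finder point into the camera image; the function name, the undistorted pinhole model, and the direction of the relative orientation parameters are assumptions rather than the paper's exact formulation.

    import numpy as np

    def project_range_point(p_rf, R_rel, t_rel, f, cx, cy):
        """Transform a 3D point from the range-finder frame into the
        camera frame using the relative orientation parameters (taken
        here as the range-finder-to-camera transform), then project it
        with a pinhole model given by the interior orientation: focal
        length f and principal point (cx, cy), both in pixels. The
        lens-distortion terms from the self-calibration are omitted."""
        p_cam = R_rel @ p_rf + t_rel
        u = f * p_cam[0] / p_cam[2] + cx  # image column (pixels)
        v = f * p_cam[1] / p_cam[2] + cy  # image row (pixels)
        return u, v                       # sample the color image at (u, v)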

Related Work
Testbed Designed for Calibration Procedure
Sensors and Mathematical Models
Calibrations and Comparative Evaluation Strategy
System Calibration
Self-Calibration Results
System Calibration Results
Conclusions and Recommendations
