Abstract

Systematic errors in point cloud data are inevitable, arising from factors that range from the external environment during scanning or observation by a terrestrial laser scanner (TLS) to the assembly of the instrument itself. For low-cost scanners these error terms may be further accentuated and include, in addition to systematic errors, random or even gross errors that directly affect the coordinates of each point in the point cloud and hence the quality of the data and all subsequent processing. To address these issues, we propose a robust target-based self-calibration method for TLS at the algorithmic level, without considering network design or measurement configuration, and derive its solution by standardizing the residual vector and computing an equivalent covariance matrix based on the IGG III function. Experiments on both simulated and real data showed that the proposed self-calibration method effectively eliminates random and gross errors in the observations, improves point accuracy from the centimeter to the millimeter level, and increases the accuracy of the corrected checkpoints by 58%, 47%, and 33%, respectively, compared with three existing methods. However, the proposed method does not account for the attenuation of parameter correlations, and further refinement of the measurement configuration will subsequently be required.

Highlights

  • As an up-and-coming measurement technology, the terrestrial laser scanner (TLS) has the benefits of being fast, non-contact, and active; the point cloud data it obtains captures both geometric and physical information about the targets, with high density and accuracy [1]

  • During the scanning or observation of an object, the quality of the point cloud is susceptible to various factors, e.g. the target itself, the external environment, and the instrument itself [2,3,4,5]; as a result, point cloud data often contain systematic or even gross errors in addition to random ones, so the output struggles to reflect the real properties of the targets, reducing observation accuracy and affecting subsequent point cloud processing to a certain extent

  • We have developed an approach for TLS self-calibration based on the Gauss-Helmert model and the TLS observation principle, with random errors introduced for all observations; for the stochastic model, observations are weighted a priori according to their nominal accuracy, and gross errors are reweighted a posteriori by applying the IGG III function to the standardized residuals
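The IGG III reweighting named in the highlights can be sketched as follows. This is a minimal illustration, not the paper's implementation; the thresholds `k0` and `k1` are typical values from the robust-estimation literature, since the paper's exact choices are not given here:

```python
import numpy as np

def igg3_weight(v, k0=1.5, k1=3.0):
    """Weight factor for standardized residuals v under the IGG III scheme.

    Three zones: |v| <= k0 keeps full weight, k0 < |v| <= k1 is
    down-weighted, and |v| > k1 is rejected. k0 and k1 are illustrative.
    """
    a = np.abs(np.asarray(v, dtype=float))
    w = np.zeros_like(a)
    w[a <= k0] = 1.0                            # keep zone
    mid = (a > k0) & (a <= k1)                  # down-weighting zone
    w[mid] = (k0 / a[mid]) * ((k1 - a[mid]) / (k1 - k0)) ** 2
    return w                                    # reject zone stays 0
```

In a scheme of this kind, an observation's a-priori weight is multiplied by this factor at each iteration, which yields the equivalent weight and hence the equivalent covariance matrix.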


Summary

INTRODUCTION

As an up-and-coming measurement technology, TLS has the benefits of being fast, non-contact, and active; the point cloud data it obtains captures both geometric and physical information about the targets, with high density and accuracy [1]. A comprehensive analysis of the self-calibration literature of recent years shows that most methods are implemented via a rigid coordinate transformation model, which ignores the random errors of the original observations; nor do they consider whether a reliable solution can still be obtained when gross errors are present in the coordinate sequences, since such errors render the functional model incorrect and may distort the adjusted results. To address these considerations, we have developed an approach for TLS self-calibration based on the Gauss-Helmert model and the TLS observation principle, with random errors introduced for all observations; for the stochastic model, observations are weighted a priori according to their nominal accuracy, and gross errors are reweighted a posteriori by applying the IGG III function to the standardized residuals.
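To illustrate how such a-posteriori reweighting behaves, the sketch below applies IGG III equivalent weights inside an iteratively reweighted least-squares loop on a toy linear model. This is a simplified stand-in for the paper's Gauss-Helmert adjustment: the function names, thresholds, and simulated data are all illustrative, and the residual scale is estimated with the MAD rather than the a-priori nominal accuracy the paper uses:

```python
import numpy as np

def igg3_weight(v, k0=1.5, k1=3.0):
    # IGG III zones: keep (|v|<=k0), down-weight (k0<|v|<=k1), reject (|v|>k1)
    a = np.abs(np.asarray(v, dtype=float))
    w = np.zeros_like(a)
    w[a <= k0] = 1.0
    mid = (a > k0) & (a <= k1)
    w[mid] = (k0 / a[mid]) * ((k1 - a[mid]) / (k1 - k0)) ** 2
    return w

def robust_fit(A, y, iters=10):
    """IRLS with IGG III equivalent weights on a linear model y = A x + e.

    Toy stand-in for a Gauss-Helmert adjustment; the residual scale is
    estimated robustly via the MAD (an assumption of this sketch).
    """
    P = np.ones(len(y))                       # start from unit weights
    for _ in range(iters):
        W = np.diag(P)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        v = y - A @ x                         # residuals
        s = max(1.4826 * np.median(np.abs(v - np.median(v))), 1e-12)
        P = igg3_weight(v / s)                # equivalent weights
    return x, P

# Contaminated line fit: one gross error among 20 observations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t + rng.normal(0.0, 0.05, t.size)
y[5] += 5.0                                   # inject a gross error
x_hat, P_hat = robust_fit(A, y)
```

After a few iterations the contaminated observation receives zero equivalent weight, so the estimated parameters are driven by the clean observations alone, which is the behavior the paper relies on to suppress gross errors.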

THEORY OF SELF-CALIBRATION
SELF-CALIBRATION METHOD FOR LOW-COST SCANNERS
Self-calibration Model
Derivation of the Proposed Method
Robust Estimation
Derivation of Residual Covariance Matrix
Iterative Calculation Process
EXPERIMENTS AND ANALYSIS
Data Simulation
Findings
CONCLUSIONS

