Abstract

The analysis of topographic time series is often based on bitemporal change detection and quantification. For 3D point clouds, acquired using laser scanning or photogrammetry, random and systematic noise has to be separated from the signal of surface change by determining the minimum detectable change. To analyse geomorphic change in point cloud data, the multiscale model-to-model cloud comparison (M3C2) approach is commonly applied, which provides a statistical significance test. This test assumes planar surfaces and a uniform registration error. For natural surfaces, the planarity assumption does not necessarily apply, in which case the minimum detectable change (Level of Detection) is overestimated. To overcome these limitations, we quantify uncertainty for each 3D point by propagating both the measurement uncertainty and the alignment uncertainty to the 3D points. This allows the calculation of 3D covariance information for the point cloud, which we use in an extended statistical test for equality of multivariate means. Our method, called M3C2-EP, gives a less biased estimate of the Level of Detection, allowing a more appropriate significance threshold in typical cases. We verify our method in two simulated scenarios, and apply it to a time series of terrestrial laser scans of a rock glacier over two different timespans of three weeks and one year. Over the three-week period, we detect significant change at 12.5% fewer 3D locations, while quantifying an additional 25.2% of change volume, when compared to the reference method of M3C2. Compared with manual assessment, M3C2-EP achieves a specificity of 0.97, whereas M3C2 reaches 0.86 for the one-year timespan, while sensitivity drops from 0.72 for M3C2 to 0.60 for M3C2-EP. Lower Levels of Detection enable the analysis of high-frequency monitoring data, where typically less change has occurred between successive scans, and where change is small compared to local roughness. Our method further allows the combination of data from multiple scan positions or data sources with different levels of uncertainty. The combination using error propagation ensures that every dataset is used to its full potential.

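To illustrate the core idea of turning propagated measurement and alignment uncertainties into a per-point significance threshold, the following Python sketch sums the 3x3 covariances of two epochs and a co-registration term, projects them onto the local surface normal, and derives a Level of Detection. All variable names, the toy numbers, and the simplified one-dimensional test along the normal are illustrative assumptions, not the published M3C2-EP implementation.

```python
# Minimal sketch (not the published M3C2-EP code) of a per-point Level of
# Detection from propagated 3D covariances.
import numpy as np
from scipy.stats import chi2

def level_of_detection(cov_epoch1, cov_epoch2, cov_reg, normal, alpha=0.05):
    """Project the combined covariance of two epochs (measurement + alignment)
    onto the local surface normal and convert it into a minimum detectable
    change at confidence level 1 - alpha."""
    n = normal / np.linalg.norm(normal)            # unit normal of the M3C2 cylinder
    cov_total = cov_epoch1 + cov_epoch2 + cov_reg  # propagated uncertainties add
    sigma2_along_normal = n @ cov_total @ n        # 1D variance along the normal
    # With a single tested direction this quantile is equivalent to a
    # two-sided z-test on the projected distance.
    k = np.sqrt(chi2.ppf(1.0 - alpha, df=1))
    return k * np.sqrt(sigma2_along_normal)

# Toy usage with assumed, anisotropic covariances [m^2]:
cov1 = np.diag([0.0004, 0.0004, 0.0001])   # epoch 1 point covariance
cov2 = np.diag([0.0006, 0.0006, 0.0002])   # epoch 2 point covariance
cov_reg = np.eye(3) * 0.0001               # co-registration covariance
normal = np.array([0.1, 0.0, 1.0])         # local surface normal

lod = level_of_detection(cov1, cov2, cov_reg, normal)
measured_distance = 0.08                   # M3C2 distance along the normal [m]
print(f"LoD95% = {lod:.3f} m -> significant: {abs(measured_distance) > lod}")
```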