ABSTRACT

Over the past decade, Structure-from-Motion (SfM) and Multi-View Stereo (MVS) techniques have been highly effective in generating high-quality 3D point clouds, especially when integrated with Unmanned Aerial Vehicles (UAVs). However, accurately predicting errors in these point clouds remains challenging. This study introduces a predictive model for quantifying errors in point clouds generated using SfM-MVS, based on 2D image error propagation. We analysed the impact of four key image quality factors – exposure, resolution, blur (including motion and out-of-focus blur), and noise – on 2D image errors. These factors are determined by nine practical parameters spanning camera settings (ISO, shutter speed, F-number, and pixel resolution), UAV flight methods (speed and distance), camera specifications (sensor size and focal length), and illumination conditions. To account for variations in SfM-MVS output quality due to software and camera model differences, we introduced two experimentally calibratable constants: the maximum template size and the noise sensitivity coefficient. We validated the error prediction model by comparing predicted errors with observations from a reference indoor target, using a total of 258 image sets acquired with varying parameter combinations. The model demonstrated high predictive accuracy, achieving an R² value of 0.84, confirming the effectiveness and feasibility of the model in accurately predicting point cloud errors from the nine parameters.