Abstract

Multi-sensor data fusion has recently gained wide attention within the Geomatics research community, as it helps overcome the limitations of individual sensors and enables both a complete 3D model of a structure and improved object classification. This study develops a data fusion algorithm that optimally combines sensor data from a terrestrial system and an unmanned aerial system (UAS) to obtain an improved, complete 3D mapping model of a structure. Terrestrial laser scanner (TLS) data are collected for the exterior of a building, along with aerial images from a DJI Phantom 4 Pro and terrestrial close-range images from a Sony α7R camera. A number of ground control points and targets are established throughout the scanned building for the photogrammetric process and scan registration. Separate point cloud datasets are generated from the TLS data, the UAS images, and the terrestrial Sony camera images. The point clouds from each individual sensor, as well as the fused point clouds, are used in three forms: original, denoised, and subsampled. The denoised dataset is generated by applying the statistical outlier removal (SOR) filter to the original point clouds. The relative precision of the 3D models is investigated using the multiscale model-to-model cloud comparison (M3C2) method, with the TLS-based 3D model used as the reference. It is found that the precision of the Sony-based 3D model is higher than that of the other two models for both the original and denoised datasets. The fused Sony/UAS-based model provides a complete 3D model with higher precision than the UAS-based model.

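The abstract does not name the software used for the filtering and subsampling steps. As a minimal illustrative sketch only, the SOR denoising and voxel subsampling described above could be reproduced with the open-source Open3D library; the file paths and parameter values below are hypothetical, not taken from the study:

```python
import open3d as o3d

# Hypothetical input file; the study's actual data are not public.
pcd = o3d.io.read_point_cloud("building_exterior.ply")

# Statistical outlier removal (SOR): for each point, the mean distance
# to its nb_neighbors nearest neighbors is computed; points whose mean
# distance exceeds the global mean by more than std_ratio standard
# deviations are discarded. Parameter values here are illustrative.
denoised, kept_indices = pcd.remove_statistical_outlier(
    nb_neighbors=20, std_ratio=2.0
)

# Subsampled dataset: voxel downsampling on an assumed 2 cm grid.
subsampled = denoised.voxel_down_sample(voxel_size=0.02)

o3d.io.write_point_cloud("building_denoised.ply", denoised)
o3d.io.write_point_cloud("building_subsampled.ply", subsampled)
```

CloudCompare, which implements both a SOR filter and the M3C2 distance computation used for the precision comparison, is a common graphical alternative for this kind of workflow.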