Abstract

This paper presents a study on the potential of ultra-high-accuracy UAV-based 3D data capture by combining imagery and LiDAR data. Our work is motivated by a project aiming at the monitoring of subsidence in an area of mixed use: it covers built-up regions in a village, with a ship lock as the main object of interest, as well as regions of agricultural use. In order to monitor potential subsidence on the order of 10 mm/year, we aim at sub-centimeter accuracies for the respective 3D point clouds. We show that hybrid georeferencing increases the accuracy of the adjusted LiDAR point cloud by integrating results from photogrammetric block adjustment to improve the time-dependent trajectory corrections. As our main contribution, we demonstrate that the joint orientation of laser scans and images in a hybrid adjustment framework significantly improves the relative and absolute height accuracies. By these means, accuracies corresponding to the GSD of the integrated imagery can be achieved. Image data can also help to enhance the LiDAR point clouds; as an example, integrating results from Multi-View Stereo can increase the point density of the airborne LiDAR data. Furthermore, image texture can support 3D point cloud classification. This semantic segmentation, discussed in the final part of the paper, is a prerequisite for further enhancement and analysis of the captured point cloud.
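
Since the achievable accuracy is tied to the GSD of the imagery, a short back-of-the-envelope example may help to put the sub-centimeter goal into perspective. The flight parameters below are assumed illustration values, not figures taken from the paper; the nadir GSD simply scales the physical pixel size by the ratio of flying height to focal length.

    # Hypothetical UAV flight configuration (illustration only, not from the paper)
    flying_height_m = 50.0     # flying height above ground [m]
    focal_length_m = 0.05      # camera focal length (50 mm) [m]
    pixel_pitch_m = 4.6e-6     # physical pixel size on the sensor (4.6 um) [m]

    # Nadir ground sampling distance: pixel footprint projected onto the ground
    gsd_m = flying_height_m / focal_length_m * pixel_pitch_m
    print(f"GSD = {gsd_m * 1000:.1f} mm")   # -> GSD = 4.6 mm

With such an assumed configuration, errors at the level of one GSD correspond to roughly half a centimeter in object space, which is the regime targeted by the sub-centimeter requirement above.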

Highlights

  • The quality of area-covering 3D point clouds as captured by aerial and mobile mapping platforms still experiences a considerable boost due to the ongoing advancements in LiDAR technology and Multi-View-Stereo-Matching (MVS)

  • One main advantage of MVS is that the resulting geometric accuracy directly corresponds to the Ground Sampling Distance (GSD) and the scale of the evaluated imagery

  • Full use of the geometric information provided by these data sources requires a semantic analysis of the respective point clouds

Summary

INTRODUCTION

The quality of area-covering 3D point clouds as captured by aerial and mobile mapping platforms still experiences a considerable boost due to the ongoing advancements in LiDAR technology and Multi-View-Stereo-Matching (MVS). For area-covering monitoring of changes such as the subsidence investigated here, 3D point clouds at mm-accuracy have to be provided twice a year. Up to now, such accuracy demands have presumed terrestrial data collection using geodetic instruments such as levelling instruments, total stations and differential GNSS. Photogrammetric data collection at mm-scale requires image acquisition at a similar resolution, which typically presumes the use of UAVs. If (signalized) ground control points are available with sufficient accuracy and distribution, integrated georeferencing and subsequent dense image matching can in principle provide 3D point clouds at an accuracy of a few millimeters. However, as demonstrated by Cramer et al. (2018), providing such quality for a considerable number of points scattered across a larger test area requires great effort. We therefore apply a hybrid orientation of airborne LiDAR point clouds and aerial images as proposed by Glira et al. (2019). This integration of aerial imagery not only increases the resulting accuracy of the LiDAR points during georeferencing, it also provides a precise co-registration of both data sources.
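
To give a rough, self-contained impression of the trajectory-correction idea, the sketch below is a deliberately simplified, hypothetical illustration and not the actual method of Glira et al. (2019): it estimates a time-dependent vertical correction for a single LiDAR strip from height discrepancies at a few photogrammetrically determined reference locations, using a low-order polynomial and ordinary least squares. All numbers are invented for illustration.

    import numpy as np

    # Hypothetical height discrepancies between a LiDAR strip and photogrammetrically
    # determined reference points along the strip (invented values, not paper data)
    t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])              # time along the strip [s]
    dz = np.array([0.012, 0.010, 0.007, 0.006, 0.004, 0.001])  # LiDAR minus reference [m]

    # Model the time-dependent vertical trajectory error as a low-order polynomial
    # dz(t) = a0 + a1*t + a2*t^2 and estimate its coefficients by least squares
    A = np.column_stack([np.ones_like(t), t, t**2])
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)

    # Subtracting the modelled error removes the systematic, time-dependent part of
    # the discrepancy; only noise-level residuals remain
    residuals = dz - A @ coeffs
    print(f"RMS before correction: {np.sqrt(np.mean(dz**2)):.4f} m")
    print(f"RMS after correction:  {np.sqrt(np.mean(residuals**2)):.4f} m")

In the actual hybrid adjustment, corrections are estimated for the full trajectory and jointly with the image orientations, which is what yields the precise co-registration of both data sources mentioned above.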

JOINT GEOREFERENCING OF IMAGE AND LIDAR DATA
LiDAR Strip Adjustment
Hybrid Orientation of Airborne LiDAR and Aerial Images
Comparison of Elevation Models from Different Epochs
POINT CLOUDS FROM LIDAR AND MULTI-VIEW STEREO
SEMANTIC SEGMENTATION OF POINT CLOUDS
Findings
CONCLUSION AND FURTHER WORK