Abstract

Compared with traditional manned airborne photogrammetry, unmanned aerial vehicle remote sensing (UAVRS) has the advantages of lower cost and higher flexibility in data acquisition. It has therefore found applications in various fields, such as three-dimensional (3D) mapping and emergency management. However, due to the instability of the UAVRS platforms and the low accuracy of the onboard exterior orientation (EO) observations, direct georeferencing of the image data leads to large location errors. Light detection and ranging (LiDAR) data, which provide highly accurate 3D information, are treated as a complementary data source to the optical images. This paper presents a semi-automatic approach for the registration of UAVRS images and airborne LiDAR data based on linear control features. The presented approach consists of three main components, as follows. (1) Buildings are first separated from the point cloud by the integrated use of height and size filtering and RANdom SAmple Consensus (RANSAC) plane fitting, and the 3D line segments of the building ridges and boundaries are semi-automatically extracted through plane intersection and boundary regularization with manual selections; (2) the 3D line segments are projected to the image space using the initial EO parameters to obtain their approximate locations, and all the corresponding 2D line segments are semi-automatically extracted from the UAVRS images. Meanwhile, the tie points of the UAVRS images are generated using the Förstner operator and least-squares image matching; and (3) by use of the equations derived from the coplanarity constraints of the linear control features and the collinearity constraints of the tie points, block bundle adjustment is carried out to update the EO parameters of the UAVRS images in the coordinate framework of the LiDAR data, achieving the co-registration of the two datasets. Experiments were performed to demonstrate the validity and effectiveness of the presented method, and a comparison with the traditional registration method based on LiDAR intensity images showed that the presented method is more accurate and can achieve sub-pixel accuracy.
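
As a rough illustration of step (2), the following Python sketch (not the authors' implementation; the omega-phi-kappa angle convention, parameter names, and all numeric values are hypothetical) shows how the endpoints of a LiDAR-derived 3D line segment can be projected into image space with the approximate EO parameters through the standard collinearity equations, giving the approximate location used to guide the extraction of the conjugate 2D segment.

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Camera-to-object rotation built from omega-phi-kappa angles (radians)."""
        Rx = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(omega), -np.sin(omega)],
                       [0.0, np.sin(omega),  np.cos(omega)]])
        Ry = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(phi), 0.0, np.cos(phi)]])
        Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                       [np.sin(kappa),  np.cos(kappa), 0.0],
                       [0.0, 0.0, 1.0]])
        return Rx @ Ry @ Rz

    def project_point(P, eo, f):
        """Collinearity equations: object-space point P -> image coordinates (x, y)."""
        Xs, Ys, Zs, omega, phi, kappa = eo
        R = rotation_matrix(omega, phi, kappa)
        d = R.T @ (P - np.array([Xs, Ys, Zs]))  # object frame -> camera frame
        return np.array([-f * d[0] / d[2], -f * d[1] / d[2]])

    # A roof-ridge segment extracted from the LiDAR point cloud (hypothetical
    # coordinates), projected with the initial GPS/IMU-based EO parameters to
    # obtain its approximate image-space location.
    eo_init = (500040.0, 4000020.0, 1200.0, 0.010, -0.015, 0.300)  # Xs, Ys, Zs, angles
    ridge_3d = [np.array([500060.0, 4000050.0, 85.0]),
                np.array([500075.0, 4000050.0, 85.0])]
    approx_2d = [project_point(P, eo_init, f=0.035) for P in ridge_3d]  # image-plane metres
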

Highlights

  • Unmanned aerial vehicle remote sensing (UAVRS) platforms are usually equipped with a charge-coupled device (CCD) digital camera for image acquisition, and a global positioning system (GPS) and an inertial measurement unit (IMU) for observation of the platform position and attitude. Compared with traditional manned airborne remote sensing, the advantages of UAVRS are that it can work in high-risk situations and inaccessible areas without endangering human lives, and it can capture higher-resolution images at a lower altitude

  • Eighteen GPS-measured ground points were used to evaluate the absolute positioning accuracy of the UAVRS images under four scenarios: direct georeferencing, free network adjustment, registration based on the light detection and ranging (LiDAR) intensity image, and registration based on linear features

  • Unmanned aerial vehicle remote sensing (UAVRS) has found applications in various fields, which can be attributed to its high flexibility in data acquisition and the interpretable visual texture of its optical images


Summary

Introduction

Unmanned aerial vehicle remote sensing (UAVRS) platforms are usually equipped with a charge-coupled device (CCD) digital camera for image acquisition, and a global positioning system (GPS) and an inertial measurement unit (IMU) for observation of the platform position and attitude. An important issue for the integration of LiDAR data and UAVRS optical images is the registration of these two different types of datasets. Linear features have advantages including the following [40,41]: (1) linear features in image space are easier to extract with sub-pixel accuracy across the direction of the edge, as they are discontinuous in only one direction, whereas point features are discontinuous in all directions; (2) linear features carry higher semantic information, and geometric constraints are more likely to exist among linear features than among points, which reduces the matching ambiguity; and (3) linear features increase the redundancy and improve the robustness and geometric strength of the photogrammetric adjustment. In our study, only two points are used to represent a linear feature in image space, which is interactively extracted using line detection algorithms. Differing from the scenarios in most existing studies, where only a few optical images were used for registration with LiDAR data and each image had adequate independent control features, our study investigated the registration of 109 UAVRS images and airborne LiDAR data using 16 linear control features, which is expected to enrich the methodology for the registration of UAVRS optical images and airborne LiDAR data
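
For reference, the coplanarity condition commonly used in such line-based bundle adjustment can be written as follows (a standard photogrammetric form; the exact parameterization in the paper may differ). For each of the two endpoints used to represent a 2D image segment, the viewing ray from the perspective center through that endpoint must lie in the plane spanned by the perspective center and the conjugate 3D LiDAR segment:

    \[
      \big[(\mathbf{A}-\mathbf{S})\times(\mathbf{B}-\mathbf{S})\big]\cdot
      \mathbf{R}\begin{pmatrix} x \\ y \\ -f \end{pmatrix} = 0,
    \]

where \(\mathbf{A}\) and \(\mathbf{B}\) are the endpoints of the 3D line segment, \(\mathbf{S}\) is the perspective center given by the EO position, \(\mathbf{R}\) is the rotation matrix formed from the EO attitude angles, \((x, y)\) are the image coordinates of one endpoint of the 2D segment, and \(f\) is the focal length; each linear control feature thus contributes two such equations per image to the block bundle adjustment.
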

Methodology
Extraction of Building Roof Points
Extraction of 3D Line Segments from Building Roof Points
Extraction of Conjugate 2D Line Segments and Tie Points from UAVRS Images
Coplanarity Constraint of the Linear Control Features
Block Bundle Adjustment
Study Area and Data Used
Linear Control Features and Tie Point Extraction Results
Registration Result
Comparison with Intensity Image Based Registration and Accuracy Evaluation
Conclusions
