Abstract

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
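To make the second step concrete, below is a minimal Python sketch of deriving LiDAR control points (LCPs) by iterative plane fitting around each image-based sparse point. This is an illustrative reconstruction rather than the authors' implementation: the neighborhood radius, residual threshold, iteration count, and minimum point count are assumed values.

```python
# Illustrative sketch (not the paper's code) of LCP derivation: for each
# image-based sparse point, fit a plane to nearby LiDAR points, iteratively
# reject outliers, and project the sparse point onto the final plane.
import numpy as np
from scipy.spatial import cKDTree

def fit_plane(points):
    """Least-squares plane through points; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centered points is the
    # normal of the best-fit plane.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def derive_lcp(sfm_point, lidar_tree, lidar_xyz,
               radius=0.5, max_iter=5, tol=0.05, min_pts=10):
    """Return the LCP for one sparse SfM point, or None if unreliable."""
    idx = lidar_tree.query_ball_point(sfm_point, r=radius)
    neighbors = lidar_xyz[idx]
    if len(neighbors) < min_pts:        # not enough LiDAR support
        return None
    for _ in range(max_iter):
        centroid, normal = fit_plane(neighbors)
        residuals = np.abs((neighbors - centroid) @ normal)
        inliers = neighbors[residuals < tol]
        if len(inliers) == len(neighbors) or len(inliers) < min_pts:
            break
        neighbors = inliers             # reject outliers and refit
    # Project the sparse point onto the final plane along its normal.
    return sfm_point - ((sfm_point - centroid) @ normal) * normal

# Usage: tree = cKDTree(lidar_xyz); lcp = derive_lcp(pt, tree, lidar_xyz)
```

The resulting LCPs pair image-based tie points with plane-constrained LiDAR coordinates; these pairs then act as the control information in the subsequent GNSS/INS-assisted bundle adjustment that refines the IOPs.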

Highlights

  • Unmanned aerial vehicles (UAVs) equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS) are gaining popularity for many applications due to their capability to carry advanced sensors and collect data with high temporal and spatial resolution

  • The absolute accuracies of the LiDAR- and image-based point clouds generated using the original system calibration parameters are first reported to verify the following hypotheses: (i) the LiDAR system calibration is stable and the derived data are accurate enough to be used as a source of control for in situ interior orientation parameter (IOP) refinement, and (ii) the accuracy of the image-based point cloud is negatively affected by inaccurate system calibration parameters due to the instability of the camera IOPs

  • The accuracy of the image- and LiDAR-based point clouds derived using the original system calibration parameters is assessed through a comparison with real-time kinematic (RTK)-GNSS measurements of the target centers for the seven datasets (see the sketch after this list)
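A minimal sketch of such a checkpoint comparison is shown below; the array names and the horizontal/vertical error split are illustrative assumptions, not the paper's code.

```python
# Hedged sketch: RMSE of point-cloud-derived target centers against
# RTK-GNSS surveyed centers (both given as matched (n, 3) arrays, meters).
import numpy as np

def checkpoint_rmse(derived_xyz, rtk_xyz):
    """Horizontal and vertical RMSE between matched target centers."""
    diff = derived_xyz - rtk_xyz                        # (n, 3) residuals
    rmse_h = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))
    rmse_v = np.sqrt(np.mean(diff[:, 2] ** 2))
    return rmse_h, rmse_v
```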


Summary

Introduction

Unmanned aerial vehicles (UAVs) equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS) are gaining popularity for many applications due to their capability to carry advanced sensors and collect data with high temporal and spatial resolution. Exterior orientation parameters (EOPs), which define the position and orientation of the camera at the moment of exposure in a mapping frame, can be established using either ground control points (GCPs) through a bundle adjustment process, or trajectory information provided by a survey-grade GNSS/INS unit onboard the UAV. The former is known as indirect georeferencing, while the latter is referred to as direct georeferencing. For frame camera/LiDAR-based mobile mapping systems (MMS) relying on direct georeferencing, a rigorous system calibration is necessary to ensure high accuracy of the derived 3D point clouds.
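For context, direct georeferencing of an image point follows the standard GNSS/INS-assisted positioning model; the notation below is the conventional form from the photogrammetric literature and is not taken verbatim from this paper:

```latex
% r_I^m            : coordinates of object point I in the mapping frame
% r_b^m(t), R_b^m(t): GNSS/INS-derived position and attitude of the
%                     IMU body frame at exposure time t
% r_c^b, R_c^b     : lever arm and boresight rotation (mounting parameters)
% \lambda_i        : point-specific scale factor
% r_i^c            : image-point vector, a function of the camera IOPs
\[
  r_I^m = r_b^m(t) + R_b^m(t)\,r_c^b + \lambda_i\,R_b^m(t)\,R_c^b\,r_i^c
\]
```

Since r_i^c depends directly on the IOPs, any bias in the IOPs propagates straight into the derived ground coordinates under direct georeferencing; with no GCPs in the adjustment, such biases go uncorrected, which motivates the LiDAR-aided in situ refinement pursued in this work.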

