Abstract
Accurate registration of light detection and ranging (LiDAR) point clouds and images is a prerequisite for integrating the spectral and geometric information collected by low-cost unmanned aerial vehicle (UAV) systems. Most registration approaches treat the directly georeferenced LiDAR point cloud as a rigid body, assuming that the high-precision positioning and orientation system (POS) in the LiDAR system is accurate enough for the POS errors to be negligible. However, because of the large errors of the low-precision POSs commonly used in low-cost UAV LiDAR systems (ULSs), dramatic deformation may exist in the directly georeferenced ULS point cloud, resulting in a non-rigid transformation between the images and the deformed ULS point cloud. As a result, registration may fail when a rigid transformation is assumed between the images and the directly georeferenced LiDAR point cloud. To address this problem, we propose NRLI-UAV, a non-rigid method for registering sequential raw laser scans and images collected by low-cost UAV systems. NRLI-UAV is a two-step registration method that combines trajectory correction with minimization of the discrepancy between depths derived from structure from motion (SfM) and the raw laser scans, thereby also improving the quality of the LiDAR point cloud. First, the coarse registration procedure uses global navigation satellite system (GNSS) and inertial measurement unit (IMU)-aided SfM to obtain accurate image orientation and to correct the errors of the low-precision POS. Second, the fine registration procedure converts the original 2D-3D registration into a 3D-3D registration: with the oriented images as the reference, the discrepancy between the SfM-derived depth maps and the raw laser scans is iteratively minimized, yielding accurate registration between the images and the LiDAR point cloud. In addition, an improved LiDAR point cloud is generated in the mapping frame. Experiments were conducted with data collected by a low-cost UAV system in three challenging scenes to evaluate NRLI-UAV. The final registration errors between the images and the LiDAR point cloud are less than one pixel in image space and less than 0.13 m in object space. Point cloud quality was also evaluated by plane fitting: NRLI-UAV reduces the plane-fitting root-mean-square error (RMSE) from 0.45 m to 0.05 m, an 8.8-fold improvement, demonstrating a high level of automation, robustness, and accuracy.
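The point cloud quality figures above are reported as the RMSE of plane fitting on nominally planar surfaces before and after correction (0.45 m vs. 0.05 m). The snippet below is a minimal sketch of how such a metric can be computed; the function name, the use of NumPy, and the patch-extraction step are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def plane_fit_rmse(points: np.ndarray) -> float:
    """RMSE of orthogonal point-to-plane residuals for a best-fit plane.

    points: (N, 3) array of LiDAR points sampled from a nominally planar
    surface (e.g. a roof or facade patch).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Total-least-squares plane: the normal is the right singular vector
    # associated with the smallest singular value of the centered points.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal  # signed orthogonal distances to the plane
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical usage: compare the same planar patch extracted from the
# directly georeferenced cloud and from the corrected cloud.
# raw_patch, corrected_patch = ...  # (N, 3) arrays
# print(plane_fit_rmse(raw_patch), plane_fit_rmse(corrected_patch))
```

In the same spirit, the fine registration step can be read as iteratively adjusting the trajectory so that residuals of this kind between the SfM-derived depth maps and the raw laser scans are minimized.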