Abstract

Accurate geometric registration of images and point clouds is a key step in many 3D-reconstruction and 3D-sensing tasks. In this paper, a novel L-junction based approach is proposed for the semi-automatic, accurate registration of aerial images and airborne laser scanning (ALS) point clouds in urban areas. The approach achieves accurate registration by associating the LiDAR points with local planes extracted via L-junction detection and matching from multi-view aerial images. An L-junction is an intersection of two line segments. Through the forward intersection of multi-view corresponding L-junctions, an accurate local junction plane can be obtained. In the proposed approach, L-junctions are manually collected in one view on flat object surfaces such as walls, roads, and roofs, and then automatically matched to the other views with the aid of epipolar-geometry and vanishing-point constraints. A plane-constrained bundle block adjustment of the image-orientation parameters is then conducted, with the LiDAR points treated as reference data. The proposed approach was tested on two datasets collected in the cities of Guangzhou and Ningbo, China. The experimental results showed that the proposed approach achieved better accuracy than the closest-point based method. The horizontal/vertical registration RMS of the proposed approach reached 4.21 cm/5.72 cm on the Guangzhou dataset and 4.46 cm/4.34 cm on the Ningbo dataset, which is much less than the average LiDAR point spacing (over 25 cm in both datasets) and very close to the image GSDs (3.2 cm in Guangzhou and 4.8 cm in Ningbo) and the a-priori ranging accuracy of the ALS equipment (about 3 cm).
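The plane constraint at the core of the adjustment can be illustrated with a minimal sketch (an illustration under stated assumptions, not the paper's implementation): a local plane is fitted to the 3D corners obtained by forward-intersecting corresponding L-junctions, and the signed point-to-plane distances of nearby LiDAR points serve as the residuals a plane-constrained bundle block adjustment would drive toward zero. All coordinates and function names below are hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit (unit normal n, offset d with n.x + d = 0)
    via SVD of the centered coordinates; the right singular vector with the
    smallest singular value is the plane normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = -n @ centroid
    return n, d

def point_to_plane_residuals(lidar_points, n, d):
    """Signed point-to-plane distances of LiDAR points to the junction plane."""
    return lidar_points @ n + d

# Hypothetical corners of an L-junction triangulated from multiple views,
# lying (approximately) on a roof plane z = 0.
junction_pts = np.array([[0.0, 0.0, 0.00],
                         [1.0, 0.0, 0.01],
                         [0.0, 1.0, -0.01],
                         [1.0, 1.0, 0.00]])
n, d = fit_plane(junction_pts)

# Hypothetical nearby LiDAR points; their plane distances are the residuals.
lidar = np.array([[0.5, 0.5, 0.05],
                  [0.2, 0.8, -0.03]])
res = point_to_plane_residuals(lidar, n, d)
```

In an actual adjustment, these residuals would be re-evaluated after each update of the image-orientation parameters (which move the triangulated junction plane), so that minimizing them pulls the photogrammetric block onto the LiDAR reference.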



Introduction

The recent two decades have witnessed the rapid development of digital cameras and Light Detection and Ranging (LiDAR) sensors. In methods based on multi-modal image matching, the 3D information in the LiDAR data must first be projected onto the 2D image space. In (Wong and Orchard, 2008), the intensity of the LiDAR returns was projected onto the image space to form an intensity map, and registration was achieved by extracting point matches between the intensity map and the aerial images. In (Shorter and Kasparis, 2008), the edges of buildings were extracted from the LiDAR point clouds and projected onto the image space to be matched with the aerial images through phase correlation. In (Barsai et al., 2017), building edges were extracted from both the LiDAR point clouds and the aerial images, and registration was achieved by minimizing the overall distance between the two sets of 2D edges without establishing explicit correspondences. The above-mentioned methods transform the 3D alignment problem into a 2D multi-modal image-matching problem or a
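The projection step these image-matching methods share can be sketched with a standard pinhole camera model (hypothetical intrinsics and poses; not tied to any of the cited implementations):

```python
import numpy as np

def project_points(points_world, R, t, K):
    """Project 3D LiDAR points into 2D pixel coordinates with a pinhole
    model: transform to the camera frame, apply the intrinsic matrix K,
    then divide by depth (perspective division)."""
    cam = points_world @ R.T + t      # world frame -> camera frame
    uvw = cam @ K.T                   # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # perspective division

# Hypothetical camera: identity orientation, 1000 px focal length,
# principal point at (500, 500).
K = np.array([[1000.0,    0.0, 500.0],
              [   0.0, 1000.0, 500.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

pts = np.array([[0.0, 0.0, 10.0],
                [1.0, 2.0, 10.0]])
px = project_points(pts, R, t, K)
# → [[500, 500], [600, 700]]
```

Once the 3D points (or attributes such as return intensity) are mapped to pixel coordinates this way, the alignment reduces to a 2D multi-modal matching problem between the projected map and the aerial image.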

