Abstract

Applications based on the synergistic integration of optical imagery and LiDAR data are receiving growing interest from the remote sensing community. However, a misaligned integration of these datasets may fail to fully exploit the potential of both sensors. In this regard, an optimal fusion of optical imagery and LiDAR data requires an accurate registration. This is a complex problem for which a versatile solution is still missing, especially when the data are collected at different times, from different platforms, and under different acquisition configurations. This paper presents a coarse-to-fine method for registering aerial/satellite optical imagery with airborne LiDAR data acquired in such a context. First, a coarse registration is performed by extracting and matching buildings from the LiDAR data and the optical imagery. Then, a Mutual Information-based fine registration is carried out, involving a super-resolution approach applied to the LiDAR data and a local estimation of the transformation model. The proposed method overcomes the challenges associated with the aforementioned difficult context. For the tested airborne LiDAR (2011) and orthorectified aerial imagery (2016) datasets, the spatial shift is reduced by 48.15% after the proposed coarse registration. Moreover, the incompatibility of size and spatial resolution is addressed by the aforementioned super-resolution. Finally, a high dataset-alignment accuracy is achieved, with a 40-cm error based on a check-point assessment and a 64-cm error based on a check-pair-line assessment. These promising results enable further research toward a complete, versatile fusion methodology for airborne LiDAR and optical imagery data in this challenging context.
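The fine-registration step described above maximizes Mutual Information (MI) between the super-resolved LiDAR-derived image and the optical image. As a rough illustration only (not the authors' implementation), MI can be estimated from a joint intensity histogram; the bin count, image sizes, and toy data below are assumptions for the sketch:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two equally sized images
    from their joint intensity histogram (a standard similarity
    measure in multimodal registration)."""
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()   # joint probability
    px = pxy.sum(axis=1)                  # marginal of img_a
    py = pxy.sum(axis=0)                  # marginal of img_b
    px_py = np.outer(px, py)              # product of marginals
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / px_py[nz])))

# Toy illustration: an image is maximally informative about itself,
# and a misalignment lowers the MI score. A fine-registration search
# would look for the transformation that maximizes this score.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64)).astype(float)
shifted = np.roll(img, 3, axis=1)  # simulate a 3-pixel misalignment
print(mutual_information(img, img), mutual_information(img, shifted))
```

In practice the MI score would be evaluated over candidate transformations (here, simply a pixel shift) and the transformation yielding the highest score retained; the paper additionally estimates this transformation locally rather than with a single global model.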

Highlights

  • It should be noted that the proposed method does not aim to address every possible scene, as we focus on registration in urban scenes

  • It is dedicated to overcoming the challenges associated with this difficult context, in which the two datasets are acquired neither from the same platform nor from the same point of view, and do not share the same spatial resolution or level of detail

Introduction

THE PERCEPTION of an environment on the Earth's surface and its follow-up exploitations require the use of multiple data sources. In many areas of remote sensing, observations from heterogeneous sources are coupled and jointly analyzed to achieve a richer description of a scene. This approach makes it possible to benefit mutually from their strengths, as well as to reduce the data uncertainty and incompleteness associated with each sensor [2]–[4]. As a matter of fact, the fusion of multisource data has become one of the mainstream research topics in the remote sensing community [1], [5].

Manuscript received January 28, 2020; revised March 4, 2020; accepted April 5, 2020.
