Abstract

Automatic matching of multi-modal remote sensing images (e.g., optical, LiDAR, SAR and maps) remains a challenging task in remote sensing image analysis due to significant non-linear radiometric differences between these images. This paper addresses this problem and proposes a novel similarity metric for multi-modal matching using geometric structural properties of images. We first extend the phase congruency model with illumination and contrast invariance, and then use the extended model to build a dense descriptor called the Histogram of Orientated Phase Congruency (HOPC) that captures geometric structure or shape features of images. Finally, HOPC is integrated as the similarity metric to detect tie-points between images through a fast template matching scheme. This metric is designed to represent geometric structural similarities between multi-modal remote sensing datasets and is robust against significant non-linear radiometric changes. HOPC has been evaluated with a variety of multi-modal images including optical, LiDAR, SAR and map data. Experimental results show its superiority over state-of-the-art similarity metrics (e.g., NCC and MI) and demonstrate its improved matching performance.

Highlights

  • Image matching is a prerequisite step for a variety of remote sensing applications including image fusion, change detection and image mosaicking

  • The magnitude and orientation of phase congruency are used to construct Histogram of Orientated Phase Congruency (HOPC), followed by a fast template matching scheme designed for this metric

  • HOPC has been evaluated against ten pairs of multi-modal images, and compared to the state-of-the-art similarity metrics such as normalized cross correlation (NCC), Matching by Tone Mapping (MTM), and mutual information (MI)
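The highlights above describe building a dense descriptor from the magnitude and orientation of phase congruency and scoring candidate windows with a correlation-based metric. The sketch below is a simplified, hypothetical illustration of that idea, not the paper's implementation: it bins an arbitrary per-pixel magnitude/orientation map into cell-wise orientation histograms (HOPC's actual phase-congruency computation and block layout are not reproduced here) and compares two descriptors with normalised cross-correlation. All function names are ours.

```python
import numpy as np

def dense_orientation_histogram(mag, ori, cell=4, n_bins=8):
    """Build a dense, HOG-style descriptor from per-pixel magnitude and
    orientation maps (a simplified stand-in for the phase-congruency
    magnitude/orientation maps that HOPC uses).

    mag, ori: 2-D arrays of equal shape; ori in [0, pi).
    Returns a flattened, L2-normalised vector of cell histograms.
    """
    h, w = mag.shape
    h, w = h - h % cell, w - w % cell                    # crop to whole cells
    bins = np.minimum((ori[:h, :w] / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros((h // cell, w // cell, n_bins))
    for i in range(h):                                   # vote magnitude into
        for j in range(w):                               # each cell's bin
            hist[i // cell, j // cell, bins[i, j]] += mag[i, j]
    v = hist.ravel()
    return v / (np.linalg.norm(v) + 1e-12)              # L2 normalisation

def ncc(a, b):
    """Normalised cross-correlation between two descriptor vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Because the histogram is L2-normalised, a global contrast scaling of the magnitude map leaves the descriptor unchanged, which is one intuition for why structure-based descriptors tolerate radiometric differences that defeat raw-intensity NCC.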


Introduction

Image matching is a prerequisite step for a variety of remote sensing applications including image fusion, change detection and image mosaicking. Although automatic image matching techniques have developed rapidly in the last decade, in practice these techniques often still require the manual selection of tie-points (or correspondences) for multi-modal remote sensing images, especially for optical-to-Synthetic Aperture Radar (SAR) or optical-to-Light Detection and Ranging (LiDAR) images. This is because there can be significant geometric distortions and radiometric (intensity) differences between these images. The goal of this paper is to find a robust matching method that is resistant to non-linear radiometric differences between multi-modal remote sensing images.

