Abstract

Automatic registration of multi-modal remote sensing data is a challenging task. Owing to the significant nonlinear radiation difference (NRD) and noise between multi-modal images, the registration points extracted by traditional gradient-based algorithms (such as SIFT) have a low repetition rate and poor feature similarity, which makes multi-modal images difficult to register. To solve these problems, this paper proposes a method called Multi-modal Image Matching Based on Information Distribution Composite Feature (IDCF). First, an adaptive information entropy map (AIEM) is proposed. The AIEM describes not only the information distribution of the image but also the distribution of its contour features. Compared with traditional contour feature extraction operators, AIEM extraction results are clearer, more comprehensive, and more detailed. In addition, the AIEM is more robust to NRD than the gradient, which is strongly affected by it. IDCF then extracts corner points on the AIEM as feature points, because corner points have better repeatability. After that, a composite feature description model based on a maximum information index map (MIIM) and an information trend map (ITM) is defined: the MIIM describes the main direction of change of the image information, and the ITM describes its overall trend of change. Finally, the sum of absolute differences (SAD) is used as the similarity criterion to match the feature points. The proposed IDCF aims to capture the structural similarity between images and has been tested on a variety of optical, LiDAR, SAR, and map data. The results show that IDCF is robust against complex NRD and outperforms advanced algorithms (i.e., RIFT, PSO-SIFT, and OS-SIFT) in matching performance.
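To make the pipeline concrete, here is a minimal sketch of its two bookend steps: an information entropy map computed over a sliding window, and SAD-based descriptor matching. This is not the paper's method; a plain fixed-window Shannon entropy stands in for the adaptive AIEM, and the corner detection and MIIM/ITM descriptors are not reproduced. The function names `local_entropy_map` and `sad_match` and all parameters are illustrative assumptions.

```python
import numpy as np

def local_entropy_map(img, win=9, bins=32):
    """Shannon entropy of intensities in a sliding window.

    A stand-in for the paper's AIEM: a plain local entropy map with a
    fixed window, without the adaptive weighting the paper proposes.
    """
    img = np.asarray(img, dtype=np.float64)
    # Quantize intensities so per-window histograms are cheap to build.
    q = np.digitize(img, np.linspace(img.min(), img.max(), bins))
    h, w = img.shape
    r = win // 2
    pad = np.pad(q, r, mode="edge")
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + win, j:j + win]
            counts = np.bincount(patch.ravel(), minlength=bins + 2)
            p = counts[counts > 0] / patch.size
            out[i, j] = -np.sum(p * np.log2(p))  # Shannon entropy in bits
    return out

def sad_match(desc_a, desc_b):
    """Match feature descriptors by minimum sum of absolute differences.

    desc_a: (n, d) array, desc_b: (m, d) array of descriptor vectors.
    Returns, for each row of desc_a, the index of its best match in desc_b.
    """
    cost = np.abs(desc_a[:, None, :] - desc_b[None, :, :]).sum(axis=2)
    return cost.argmin(axis=1)
```

In an actual IDCF-style flow, the entropy map would replace the raw image as the domain for corner extraction, and the descriptors passed to `sad_match` would be built from the MIIM and ITM rather than raw patches; SAD is used here simply because the abstract names it as the similarity criterion.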
