Abstract

The high noise and local deformation of multi-modal images reduce the accuracy of scale invariant feature transform (SIFT) image matching. To solve this problem, a new method based on the SIFT framework is proposed in this paper; it fuses a phase congruency optimization strategy with an 8-direction, principal component analysis (PCA) based dimensionality reduction of the gradient orientation. The method incorporates the histogram of orientated phase congruency (HOPC) to extract image orientation and adopts PCA to extract the main orientation, which effectively solves the loss of matching accuracy caused by orientation inversion. By using image phase instead of gradient magnitude, orientation extraction remains reliable even when image edge features are not obvious. Finally, the random sample consensus (RANSAC) algorithm is used to eliminate false match points. Simulations and experiments show that, compared with the SIFT and PCA-SIFT algorithms, the proposed method increases the number of match points and the matching accuracy and significantly reduces the mismatching rate. Statistical results show that the number of match points obtained by the proposed method increases by 20.1% and 200%, respectively, compared with these two algorithms.
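
To make the matching pipeline concrete, below is a minimal sketch of the generic SIFT-plus-RANSAC skeleton that the proposed method builds on, written with standard OpenCV calls. It is not the authors' implementation: stock SIFT stands in for the phase-congruency descriptor and the PCA main-orientation step, and the ratio and RANSAC thresholds are illustrative values.

```python
# Generic SIFT + RANSAC matching skeleton (stock OpenCV).
# The paper's phase-congruency descriptor and PCA main-orientation step
# are NOT reproduced here; plain SIFT stands in for them.
import cv2
import numpy as np

def match_with_ransac(img1, img2, ratio=0.75, ransac_thresh=3.0):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test on 2-nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    if len(good) < 4:                      # a homography needs >= 4 pairs
        return None, []

    # RANSAC homography estimation rejects false match points.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return H, inliers
```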

Highlights

  • Multi-modal image matching has a wide range of applications in the fields of medicine, remote sensing, and navigation [1]–[3]

  • The experiment verifies the algorithm on two types of data: 1) heterogeneous image matching without significant noise; 2) multi-modal image matching with artificially added zero-mean Gaussian noise with a variance of 0.1 (a noise-injection sketch follows this list)

  • Compared with the traditional scale invariant feature transform (SIFT) and principal component analysis (PCA)-SIFT algorithms, the HP-SIFT algorithm extracts 10% more points than SIFT and 5% more points than PCA-SIFT, and the number of correct matching point pairs increases by 12% and 11%, respectively
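
As a concrete illustration of the second test configuration, here is a minimal sketch of the noise injection described above. Zero mean follows the highlight; the [0, 1] intensity scale is an assumption, since the highlight does not state the image range.

```python
# Noise injection for the second test set: zero-mean Gaussian noise,
# variance 0.1. Assumption (not stated above): image normalized to [0, 1].
import numpy as np

def add_gaussian_noise(image, variance=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(loc=0.0, scale=np.sqrt(variance), size=image.shape)
    return np.clip(image.astype(np.float64) + noise, 0.0, 1.0)
```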

Introduction

Multi-modal image matching has a wide range of applications in the fields of medicine, remote sensing, and navigation [1]–[3]. Because of the significant differences in contrast, brightness, and texture among multi-modal images [4], multi-modal image matching has always been a major problem and has not been completely solved [5], [6]. According to the method used, multi-modal image matching can be classified into two types: template matching and feature matching [7], [8]. In template matching, the gray or edge information of the entire template area is matched; such methods mainly include gray similarity matching, gradient similarity matching, and the mutual information correlation method [9], [10]. Due to the different generation mechanisms of different images, methods based on template
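
As one illustration of the template-based similarity measures listed above, here is a minimal sketch of the mutual information criterion between two image patches; the 32-bin joint histogram is an arbitrary illustrative choice, not a value from the paper.

```python
# Mutual information between two equally sized image patches, the criterion
# behind the mutual information correlation method mentioned above.
# The 32-bin joint histogram is an illustrative choice.
import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```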
