Abstract

This paper presents a novel feature point descriptor for the multispectral image case: Far-Infrared and Visible Spectrum images. It allows matching interest points on images of the same scene acquired in different spectral bands. Initially, points of interest are detected on both images through a SIFT-like scale space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH) descriptor. Finally, points of interest from the multispectral images are matched by finding nearest couples using the information from the descriptor. The provided experimental results and comparisons with similar methods show both the validity of the proposed approach as well as the improvements it offers with respect to the current state of the art.
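The description stage above can be illustrated with a minimal NumPy sketch of an Edge Oriented Histogram. This is not the authors' implementation: the exact kernel shapes, the use of five edge types (four orientations plus a non-directional one), and the 4x4 cell grid are assumptions borrowed from common EOH formulations.

```python
import numpy as np

def eoh_descriptor(patch, grid=4):
    """Hypothetical EOH sketch: per-cell histograms of dominant edge type."""
    # Five 2x2 edge kernels: horizontal, vertical, two diagonals, non-directional
    s = np.sqrt(2.0)
    kernels = [
        np.array([[-1.0, -1.0], [1.0, 1.0]]),   # horizontal edge
        np.array([[-1.0, 1.0], [-1.0, 1.0]]),   # vertical edge
        np.array([[s, 0.0], [0.0, -s]]),        # 45-degree diagonal
        np.array([[0.0, s], [-s, 0.0]]),        # 135-degree diagonal
        np.array([[2.0, -2.0], [-2.0, 2.0]]),   # non-directional
    ]
    h, w = patch.shape
    responses = np.zeros((len(kernels), h - 1, w - 1))
    for k, ker in enumerate(kernels):
        # 2x2 valid correlation expressed with array slicing
        responses[k] = (ker[0, 0] * patch[:-1, :-1] + ker[0, 1] * patch[:-1, 1:]
                        + ker[1, 0] * patch[1:, :-1] + ker[1, 1] * patch[1:, 1:])
    # Strongest-responding edge type at each pixel
    dominant = np.argmax(np.abs(responses), axis=0)
    ch, cw = (h - 1) // grid, (w - 1) // grid
    hist = np.zeros(grid * grid * len(kernels))
    for i in range(grid):
        for j in range(grid):
            cell = dominant[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            for k in range(len(kernels)):
                hist[(i * grid + j) * len(kernels) + k] = np.sum(cell == k)
    return hist / (np.linalg.norm(hist) + 1e-12)  # L2-normalize
```

Because the histogram counts edge types rather than raw gradient orientations, it is less sensitive to the contrast reversals that occur between visible and infrared bands, which is the motivation the paper gives for replacing the SIFT descriptor.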

Highlights

  • The analysis of multispectral or multiband imaging has recently attracted the attention of the research community for applications in the areas of image and video processing (e.g., [1–4])

  • Detected feature points are described through an Edge Oriented Histogram (EOH) descriptor, which is the main contribution of the current work

  • The proposed approach has been evaluated with a data set containing 100 pairs of VS-Long-Wave Infrared (LWIR) images

Introduction

The analysis of multispectral or multiband imaging has recently attracted the attention of the research community for applications in the areas of image and video processing (e.g., [1–4]). Applications that combine feature points from different spectral band images are being developed. These works are mainly based on the classical SIFT algorithm, or on minor modifications of the classical approach. Even though the percentage of correct matches can be improved by introducing modifications at the matching stage, results remain very poor when SIFT, or modifications of it (e.g., [11,16]), are used as descriptors in the multispectral case (VS-LWIR), as will be presented in the Experimental Results section. This low correspondence rate is mainly due to the limited descriptive capability of the gradient in LWIR images, which in general appear smoother, with a loss of detail and texture. Even in cases where the detected feature points correspond to the same position in both images, the matching results remain quite poor due to the differences in their gradient orientations, which SIFT uses as a descriptor. Since the descriptors are by nature different, it is not correct to use them for finding similarities and matches.
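The matching stage discussed above is typically a nearest-neighbour search over descriptor vectors with a Lowe-style ratio test. The sketch below is a generic illustration of that scheme, not the paper's exact matching procedure; the ratio threshold of 0.8 is an assumption.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a ratio test (illustrative sketch).

    desc_a, desc_b: (n, d) arrays of descriptor vectors.
    Returns (i, j) index pairs whose nearest / second-nearest distance
    ratio is below the threshold, i.e. unambiguous matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly closer than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

With gradient-orientation descriptors such as SIFT, the distances between genuinely corresponding VS and LWIR points are often no smaller than those between unrelated points, so this test rejects most pairs; a descriptor built on edge structure, as proposed here, is meant to restore that separation.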

Proposed Approach
Feature Point Detection
Feature Point Description
Feature Point Matching
Experimental Results
Conclusions
