Abstract

An illumination-invariant method for computing local feature points and descriptors, referred to as the LUminance Invariant Feature Transform (LUIFT), is proposed. The method extracts the most significant local features in images degraded by nonuniform illumination, geometric distortions, and heavy scene noise. Unlike most state-of-the-art descriptors, the proposed method utilizes image phase information rather than intensity variations, which makes it robust to nonuniform illumination and noise degradations. In this work, we first use the monogenic scale-space framework to compute the local phase, orientation, energy, and phase congruency of the image at different scales. Then, a modified Harris corner detector is applied to the monogenic signal components to compute the feature points of the image. The final descriptor is created from histograms of oriented gradients of phase congruency. Computer simulation results show that the proposed method yields superior feature detection and matching performance under illumination changes, noise degradation, and slight geometric distortions compared with state-of-the-art descriptors.
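The first stage of the pipeline above, computing local phase, orientation, and energy from the monogenic signal, can be sketched as follows. This is an illustrative single-scale sketch in Python/NumPy, not the paper's implementation: it uses a log-Gabor band-pass filter and frequency-domain Riesz-transform filters, and the parameter values (`wavelength`, `sigma_ratio`) are assumptions for demonstration only.

```python
import numpy as np

def monogenic_components(image, wavelength=8.0, sigma_ratio=0.55):
    """Local phase, orientation, and energy of a log-Gabor
    band-passed image via the monogenic signal (single scale).
    Parameter values are illustrative, not from the paper."""
    rows, cols = image.shape

    # Normalized frequency grids (fftfreq convention, DC at [0, 0]).
    u = np.fft.fftfreq(cols)[np.newaxis, :]
    v = np.fft.fftfreq(rows)[:, np.newaxis]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0  # avoid division by zero at DC

    # Radial log-Gabor band-pass filter centered on 1/wavelength.
    f0 = 1.0 / wavelength
    log_gabor = np.exp(-(np.log(radius / f0) ** 2)
                       / (2.0 * np.log(sigma_ratio) ** 2))
    log_gabor[0, 0] = 0.0  # zero DC response

    # Riesz-transform filters in the frequency domain.
    riesz1 = -1j * u / radius
    riesz2 = -1j * v / radius

    F = np.fft.fft2(image)
    band = np.real(np.fft.ifft2(F * log_gabor))          # even part
    r1 = np.real(np.fft.ifft2(F * log_gabor * riesz1))   # odd part, x
    r2 = np.real(np.fft.ifft2(F * log_gabor * riesz2))   # odd part, y

    odd = np.sqrt(r1**2 + r2**2)
    energy = np.sqrt(band**2 + odd**2)   # local energy
    phase = np.arctan2(odd, band)        # local phase, in [0, pi]
    orientation = np.arctan2(r2, r1)     # local orientation
    return phase, orientation, energy

# Example: a vertical step edge produces a ridge of local energy.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
phase, orientation, energy = monogenic_components(img)
```

In the full method these components would be computed at several scales (the monogenic scale space), combined into a phase-congruency map, and fed to the modified Harris detector; those later stages are not shown here.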

Highlights

  • Feature detection and description are low-level tasks used in many computer vision and pattern recognition applications such as image classification and retrieval [1, 2], optical flow estimation [3], tracking [4], biometric systems [5], image registration [6], and 3D reconstruction [7]. The local feature detection task consists of finding “feature points” in the image

  • The LUminance Invariant Feature Transform (LUIFT) method helps us to extract the most significant local features in images degraded by nonuniform illumination, geometric distortions, and heavy scene noise

  • The performance of the proposed LUIFT method is compared with the FAST [26], STAR [37], SIFT [9], SURF [11], KAZE [12], HARRISZ [33], DAISY [10], and LIOP [21] detectors and descriptors


Summary

Introduction

The local feature detection task consists of finding “feature points” (points, lines, blobs, etc.) in the image. It is desirable that feature descriptors be invariant to viewpoint changes, blur, and affine transformations [9,10,11,12,13], and robust to noise and nonuniform illumination degradations. These last two conditions have not been completely solved, even though they are common issues in real-world applications. Nonuniform illumination variations and noise degradations remain challenges that decrease the performance of existing state-of-the-art methods


