Abstract

Finding corresponding image features between two images is often the first step in many computer vision algorithms. This paper introduces an improved synthetic basis feature descriptor algorithm that describes and compares image features in an efficient and discrete manner, with rotation and scale invariance. It works by performing a number of similarity tests between the feature region surrounding a feature point and a predetermined number of synthetic basis images, generating a feature descriptor that uniquely describes the feature region. Features in two images are matched by comparing their descriptors. Because only the similarity of the feature region to each synthetic basis image is stored, the overall storage size is greatly reduced. In short, this new binary feature descriptor is designed to provide high feature-matching accuracy with computational simplicity, relatively low resource usage, and a hardware-friendly design for real-time vision applications. Experimental results show that our algorithm produces higher precision rates and a larger number of correct matches than the original version and other mainstream algorithms, making it a good alternative for common computer vision applications. Two applications that often must cope with scaling and rotation variations are included in this work to demonstrate its performance.
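
Only the paper itself defines the exact synthetic basis images and similarity tests; the sketch below is a minimal, hypothetical illustration of the general idea of describing a feature region by its similarity counts against a fixed set of random basis patterns. The region size, number of basis images, mean-threshold binarization, and L1 matching are all assumptions chosen for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen only for illustration.
REGION_SIZE = 30   # side length of the square feature region
NUM_BASIS = 16     # number of synthetic basis images

# Fixed random binary basis images; both images being matched must
# use the same predetermined set.
BASIS = rng.integers(0, 2, size=(NUM_BASIS, REGION_SIZE, REGION_SIZE),
                     dtype=np.uint8)

def describe(region):
    """Binarize the feature region, then record its similarity to each
    basis image as the count of agreeing pixels. Storing only these
    NUM_BASIS counts, rather than the region itself, keeps the
    descriptor small."""
    binary = (region > region.mean()).astype(np.uint8)
    return np.array([(binary == b).sum() for b in BASIS], dtype=np.uint16)

def match_score(desc_a, desc_b):
    """Compare two descriptors; a smaller L1 distance means a better match."""
    return int(np.abs(desc_a.astype(np.int32) - desc_b.astype(np.int32)).sum())

# Example: a patch matches a lightly perturbed copy of itself better
# than an unrelated patch.
patch = rng.random((REGION_SIZE, REGION_SIZE))
noisy = np.clip(patch + 0.05 * rng.standard_normal(patch.shape), 0, 1)
other = rng.random((REGION_SIZE, REGION_SIZE))
print(match_score(describe(patch), describe(noisy)),   # small distance
      match_score(describe(patch), describe(other)))   # larger distance
```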

Highlights

  • Finding corresponding image features between two images is a key step in many computer vision applications, such as image retrieval, image classification, object detection, visual odometry, object tracking, and image stitching [1]

  • We develop a new version of the SYnthetic BAsis (SYBA) feature descriptor and call it the robust Synthetic Basis Feature descriptor

  • We report comparisons between SYBA and other well-known algorithms, such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Binary Robust Independent Elementary Features (BRIEF), and rBRIEF, to demonstrate the suitability of SYBA for hardware implementation and real-time embedded applications


Introduction

Finding corresponding image features between two images is a key step in many computer vision applications, such as image retrieval, image classification, object detection, visual odometry, object tracking, and image stitching [1]. Because these applications usually require processing numerous data points or running on devices with limited computational resources, feature descriptors are employed to represent specific meaningful structures in the image for fast computation and memory efficiency. Feature points in two images must first be detected and uniquely described before they can be matched. Because images are captured at different times, and often from different perspectives, a good feature description algorithm must uniquely describe the feature region and be robust against scaling, rotation, occlusion, blurring, illumination, and perspective variations between images [2].
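
As a concrete picture of this detect-describe-match pipeline, the sketch below uses OpenCV's ORB, a readily available binary descriptor, as a stand-in (SYBA itself is not bundled with OpenCV); the image file names are placeholders.

```python
import cv2

# Placeholder file names; any two overlapping grayscale views will do.
img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

# Step 1: detect feature points; step 2: describe the region around each.
# ORB stands in here for any binary feature descriptor.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Step 3: match descriptors. Hamming distance suits binary descriptors,
# and cross-checking keeps only mutually best pairs.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance:.0f}")
```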

