Abstract

Constructing appropriate descriptors for interest points in image matching is a critical task in computer vision and pattern recognition. A method called the shape–color alliance robust feature (SCARF) descriptor, an extension of the scale-invariant feature transform (SIFT) descriptor, is presented. To address the problems that SIFT is designed mainly for gray images and lacks global information about feature points, the proposed approach improves the SIFT descriptor by means of a concentric-rings model, and integrates the color invariant space and shape context with SIFT to construct the SCARF descriptor. The SCARF method is more robust than conventional SIFT, not only to color and photometric variations but also in measuring similarity as a global variation between two shapes. A comparative evaluation of different descriptors shows that the SCARF approach provides better results than four other state-of-the-art related methods.
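
As a rough illustration of the general idea of augmenting a local gray-level descriptor with ring-based color statistics, the sketch below appends a concentric-ring hue histogram to OpenCV SIFT descriptors. It is not the paper's SCARF descriptor: the ring radii, bin counts, use of the HSV hue channel, and the helper names (ring_color_histogram, sift_plus_rings) are illustrative assumptions, and the paper's color invariant space and shape-context component are not reproduced here.

```python
# Illustrative sketch only: augments OpenCV SIFT descriptors with a simple
# concentric-ring color histogram. This is NOT the SCARF descriptor itself;
# ring radii, bin counts, and the color space are assumptions for illustration.
import cv2
import numpy as np

def ring_color_histogram(img_bgr, keypoint, radii=(8, 16, 24), bins=8):
    """Concatenate hue histograms over concentric rings centred on a keypoint."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    cx, cy = keypoint.pt
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    hist = []
    inner = 0
    for outer in radii:
        mask = (dist >= inner) & (dist < outer)
        hue = hsv[..., 0][mask]
        counts, _ = np.histogram(hue, bins=bins, range=(0, 180))
        counts = counts.astype(np.float32)
        hist.append(counts / (counts.sum() + 1e-6))  # normalise each ring
        inner = outer
    return np.concatenate(hist)

def sift_plus_rings(img_bgr):
    """Compute SIFT descriptors and append ring-based color histograms."""
    sift = cv2.SIFT_create()
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return keypoints, None
    rings = np.stack([ring_color_histogram(img_bgr, kp) for kp in keypoints])
    return keypoints, np.hstack([desc, rings.astype(np.float32)])
```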
