Abstract

Due to advances in digital technologies and social networking, image collections are growing exponentially. A central aim in content-based image retrieval (CBIR) is to reduce the semantic gap between low-level visual features and high-level semantics, thereby improving retrieval performance. In this paper, this objective is addressed by introducing an effective visual words fusion technique based on the speeded-up robust features (SURF) and histograms of oriented gradients (HOG) feature descriptors. HOG is used to extract global features, whereas SURF is used to extract local features. Global features are preferred for large-scale image retrieval, whereas local features perform better in systems that support semantic queries with close visual appearance. Moreover, SURF is scale- and rotation-invariant, unlike the HOG descriptor, and performs better under low illumination. In contrast, HOG performs better in scene-recognition- or activity-recognition-based applications. In the proposed technique, visual words fusion of the SURF and HOG feature descriptors is carried out, which outperforms feature fusion of the SURF and HOG descriptors as well as state-of-the-art CBIR techniques. The proposed visual words fusion technique achieves a classification accuracy of 98.40% using a support vector machine and an image retrieval accuracy of 80.61%. Qualitative and quantitative analyses performed on four standard image collections, namely Corel-1000, Corel-1500, Corel-5000, and Caltech-256, demonstrate the effectiveness of the proposed technique based on visual words fusion of the SURF and HOG feature descriptors.
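
The following is a minimal illustrative sketch, not the authors' implementation, of how visual words fusion of SURF and HOG can be realized: local SURF descriptors and per-block HOG descriptors are each quantized against their own learned codebook, the two visual-word histograms are concatenated, and a support vector machine is trained on the fused representation. It assumes opencv-contrib-python (for the non-free SURF module), scikit-image, scikit-learn, and NumPy; the function names, codebook size (200 words), Hessian threshold (400), and HOG cell/block settings are illustrative assumptions, not values taken from the paper.

# Minimal sketch of visual-words fusion of SURF and HOG descriptors (illustrative only).
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

def surf_descriptors(gray):
    # Local SURF descriptors (64-D each); requires the non-free xfeatures2d module.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    _, desc = surf.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 64), np.float32)

def hog_block_descriptors(gray):
    # Dense HOG features reshaped into per-block 36-D descriptors (9 orientations x 2x2 cells)
    # so they can be quantized into visual words, mirroring the bag-of-visual-words treatment of SURF.
    feats = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2), feature_vector=True)
    return feats.reshape(-1, 36)

def bovw_histogram(desc, codebook):
    # Quantize descriptors against a learned codebook and return a normalized word histogram.
    if len(desc) == 0:
        return np.zeros(codebook.n_clusters)
    words = codebook.predict(desc.astype(np.float64))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def fused_representation(gray, surf_codebook, hog_codebook):
    # Visual words fusion: concatenate the SURF and HOG visual-word histograms.
    h_surf = bovw_histogram(surf_descriptors(gray), surf_codebook)
    h_hog = bovw_histogram(hog_block_descriptors(gray), hog_codebook)
    return np.concatenate([h_surf, h_hog])

def train(images, labels, n_words=200):
    # Learn one codebook per descriptor type, build fused histograms, and fit an SVM classifier.
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    surf_codebook = MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(
        np.vstack([surf_descriptors(g) for g in grays]))
    hog_codebook = MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(
        np.vstack([hog_block_descriptors(g) for g in grays]))
    X = np.array([fused_representation(g, surf_codebook, hog_codebook) for g in grays])
    clf = SVC(kernel='rbf').fit(X, labels)
    return clf, surf_codebook, hog_codebook

In this sketch the fusion happens at the visual-word (histogram) level rather than by concatenating raw SURF and HOG feature vectors, which is the distinction the abstract draws between visual words fusion and feature fusion.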
