Abstract

Content-based image retrieval (CBIR) consists of retrieving the images most similar to a query image from an online or offline database. In this article, we propose a CBIR technique that uses multi-level stacked autoencoders for feature selection and dimensionality reduction. The present work focuses on an image retrieval approach for mobile devices such as phones and tablets. The approach uses feature descriptors derived from a stacked autoencoder, in which abstract feature extraction and dimensionality reduction are handled simultaneously across the autoencoder stages. Before the actual retrieval, a query image space is created by combining the query image with similar images from the local image database (images in the device gallery) to preserve the saliency of the visual content. The features of the query image space elements are matched, in a weighted manner, against the features of images in a global dataset (target images from cloud storage, servers, etc.), and the target images are ranked by similarity score. The top-ranked images from the global dataset are then selected by thresholding the similarity score. The proposed CBIR scheme is evaluated on standard public datasets, and its performance is compared against state-of-the-art image retrieval approaches. The results show significant improvements in overall precision, recall, and time cost, and justify image search on mobile devices with limited computational capability.
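The retrieval pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the layer dimensions, sigmoid activations, random weights, query-space weighting, and the threshold rule are all hypothetical placeholders, not the trained model or parameters from the paper. It shows the flow of the method: encode images through stacked autoencoder layers, compute weighted similarity of each global-dataset image to the query image space, then rank and threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, layers):
    """Pass a raw descriptor through stacked encoder layers.
    Each layer reduces dimensionality; the final code is the
    compact abstract feature used for matching."""
    for W, b in layers:
        x = 1.0 / (1.0 + np.exp(-(x @ W + b)))  # sigmoid (illustrative)
    return x

# Hypothetical encoder stages: 256-D raw descriptor -> 64 -> 16
dims = [256, 64, 16]
layers = [(rng.standard_normal((d_in, d_out)) * 0.1, np.zeros(d_out))
          for d_in, d_out in zip(dims, dims[1:])]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Query image space: the query plus similar images from the local
# gallery, with the query weighted most heavily (weights assumed).
query_space = rng.random((3, 256))          # 1 query + 2 local neighbours
space_weights = np.array([0.6, 0.2, 0.2])

global_db = rng.random((100, 256))          # stand-in for cloud-side images

q_codes = np.stack([encode(v, layers) for v in query_space])
g_codes = np.stack([encode(v, layers) for v in global_db])

# Weighted similarity of each global image against the query space
scores = np.array([
    sum(w * cosine(q, g) for w, q in zip(space_weights, q_codes))
    for g in g_codes
])

# Rank descending and keep images above a similarity threshold
# (a relative threshold is assumed here for illustration).
threshold = 0.9 * scores.max()
ranked = np.argsort(scores)[::-1]
retrieved = [int(i) for i in ranked if scores[i] >= threshold]
```

Only the compact 16-D codes are compared at query time, which is what makes the matching step cheap enough for a device with limited compute.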
