Abstract

Content-based image retrieval (CBIR) systems extract and retrieve images using their low-level features, such as color, texture, and shape. However, these visual contents do not allow a user to formulate a semantically meaningful image query. Image annotation systems are a solution to the inadequacy of CBIR systems and enable text-based image retrieval. There have been several studies on automatic image annotation that apply machine learning techniques to image representations built from low-level features extracted with either global or local methods. Typically, however, these approaches suffer from low correlation between the globally assigned annotations and the visual features used to obtain the annotations automatically. In this paper, we present an approach to enhance the effectiveness of CBIR using learning-based automatic image annotation on top of a bag-of-visual-words image representation, where the visual vocabulary is created automatically from a set of manually annotated training images. The experiments use 4,000 annotated training images and 1,000 test images from ImageNet. The results show an annotation accuracy of 77.5%. This work is believed to be one step towards enhancing the performance and effectiveness of existing CBIR systems and minimizing the semantic gap.
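The abstract does not detail how the bag-of-visual-words vocabulary is built. As a minimal illustrative sketch only (all function names are my own, and the descriptors here are toy 2-D vectors rather than the SIFT/SURF-style local descriptors such systems typically cluster), the usual pipeline clusters local descriptors from training images with k-means to form a visual vocabulary, then represents each image as a histogram of nearest-word assignments:

```python
import random
from collections import Counter


def dist2(a, b):
    # Squared Euclidean distance between two descriptor vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))


def kmeans(points, k, iters=20, seed=0):
    # Toy k-means: the centroids become the "visual words".
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[idx].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # keep old centroid if the cluster emptied
                centroids[i] = [sum(dim) / len(cluster)
                                for dim in zip(*cluster)]
    return centroids


def bovw_histogram(descriptors, vocabulary):
    # Quantize each local descriptor to its nearest visual word
    # and count occurrences: the image's bag-of-visual-words vector.
    k = len(vocabulary)
    counts = Counter(
        min(range(k), key=lambda i: dist2(d, vocabulary[i]))
        for d in descriptors
    )
    return [counts.get(i, 0) for i in range(k)]


# Synthetic data standing in for local descriptors of training images.
rng = random.Random(1)
training_descriptors = [[rng.random(), rng.random()] for _ in range(60)]
vocabulary = kmeans(training_descriptors, k=4)

# Represent one "image" (10 local descriptors) as a 4-bin histogram.
image_descriptors = [[rng.random(), rng.random()] for _ in range(10)]
histogram = bovw_histogram(image_descriptors, vocabulary)
```

The resulting fixed-length histogram is what a classifier would consume to predict annotations, regardless of how many local descriptors each image produced.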
