Abstract

Content-based image retrieval has developed rapidly and receives much attention from the computer vision community, supported by the ubiquity of the Internet and digital devices. The bag-of-words method, adopted from text-based retrieval, trains on images' local features to build a visual vocabulary: the local features are quantized by clustering them into a number of bags, and the resulting visual words are used to represent the features. Here, the scale-invariant feature transform (SIFT) descriptor is used as the local feature of images, which are compared with each other to measure their similarity. Compared to global features, SIFT is robust to clutter and partial visibility. The main goal of this research is to build a vocabulary and use it to measure image similarity across two tiny image datasets. The k-means clustering algorithm is used to find the centroid of each cluster at different values of k. The experimental results show that the bag-of-keypoints method has potential for content-based image retrieval.
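The pipeline the abstract describes (cluster local descriptors into a vocabulary with k-means, then quantize each image's descriptors into a histogram of visual words) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random arrays stand in for real 128-dimensional SIFT descriptors, and the vocabulary size `k` and the histogram-intersection similarity are illustrative choices.

```python
# Bag-of-visual-words sketch. The random arrays below are stand-ins for
# 128-dimensional SIFT descriptors; in practice they would come from a
# SIFT extractor run over the training images.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Descriptors pooled from all training images (500 descriptors, 128-D each).
train_descriptors = rng.normal(size=(500, 128))

k = 10  # vocabulary size: number of visual words (clusters)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_descriptors)

def bag_of_words(descriptors: np.ndarray) -> np.ndarray:
    """Assign each descriptor to its nearest centroid (visual word) and
    return a normalized histogram of word counts for the image."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# Represent two images as word histograms and compare them, e.g. by
# histogram intersection (1.0 = identical distributions).
img_a = bag_of_words(rng.normal(size=(60, 128)))
img_b = bag_of_words(rng.normal(size=(80, 128)))
similarity = np.minimum(img_a, img_b).sum()
```

The same histograms could instead be compared with cosine similarity or a chi-squared distance; histogram intersection is shown only because it is simple and bounded in [0, 1].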
