Abstract

Current image retrieval systems are based on low-level features such as color, texture, and shape, rather than on the semantic descriptions natural to humans, such as objects, people, and places. To narrow the gap between the low level and the semantic level, this study describes an efficient and effective image similarity calculation method that compares images at the level of object classes. It is suitable not only for images with a single object but also for images containing multiple and partially occluded objects. In this approach, a machine learning algorithm is used to predict the class of each object-contour segment. The similarity between two images is then computed as the Euclidean distance between the images in the resulting k-dimensional space. Experimental results show that this approach is effective and is invariant to rotation, scaling, and translation of objects.
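The abstract does not give implementation details, but the similarity measure it describes can be sketched as follows: summarize each image as a k-dimensional vector derived from the predicted classes of its contour segments, then compare images by Euclidean distance. All names here (`class_histogram`, `K`, the normalization step) are illustrative assumptions, not the paper's method.

```python
import math

# Assumed number of object classes (the paper's k); chosen arbitrarily here.
K = 4

def class_histogram(predicted_labels, k=K):
    """Build a k-dimensional class-frequency vector from per-segment predictions."""
    vec = [0.0] * k
    for label in predicted_labels:
        vec[label] += 1.0
    total = sum(vec)
    # Normalizing by segment count is an assumption; it makes the vector
    # independent of how many segments an image happens to contain.
    return [v / total for v in vec] if total else vec

def euclidean_distance(a, b):
    """Euclidean distance between two k-dimensional image vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical per-segment class predictions for two images.
img1 = class_histogram([0, 0, 2, 3])
img2 = class_histogram([0, 2, 2, 3])
print(euclidean_distance(img1, img2))
```

Because the vectors depend only on predicted class labels, not on segment positions or sizes, a distance of this form is naturally unaffected by rotation, scaling, or translation of the objects, consistent with the invariance the abstract claims.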
