Abstract

Classic Content-Based Image Retrieval (CBIR) takes a single non-annotated query image and retrieves similar images from an image repository. Such a search must rely upon a holistic (or global) view of the image. Yet often the desired content of an image is not holistic but localized. Specifically, we define Localized Content-Based Image Retrieval as a CBIR task in which the user is interested in only a portion of the image, and the rest of the image is irrelevant. Many classic CBIR systems use relevance feedback to obtain images labeled as desirable or not desirable. Yet these labeled images are typically used only to re-weight the features within a global similarity measure. In this paper we present a localized CBIR system, Accio, that uses labeled images in conjunction with a multiple-instance learning algorithm to first identify the desired object and re-weight the features, and then to rank the images in the database using a similarity measure based upon individual regions within the image. We evaluate our system using a five-category natural scenes image repository and a benchmark data set, SIVAL, that we have constructed with 25 object categories.
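The abstract describes the pipeline only at a high level: segmented images are treated as bags of region feature vectors, a multiple-instance learner identifies the target object and feature weights, and the repository is then ranked by a region-based similarity measure. The minimal Python sketch below illustrates just the ranking step under simplifying assumptions: it presumes a target feature point and per-feature weights have already been learned by some multiple-instance method, and it scores each image by its best-matching region. The function names and the exponential form of the similarity are illustrative conventions, not the paper's exact formulation.

```python
import numpy as np

def region_similarity(region, target, weights):
    """Similarity between one region's feature vector and an assumed
    learned target point, using per-feature weights (higher = closer)."""
    d = np.sum((weights * (region - target)) ** 2)
    return np.exp(-d)

def image_score(bag, target, weights):
    """Score an image (a 'bag' of region feature vectors) by its
    best-matching region, in the spirit of multiple-instance ranking."""
    return max(region_similarity(r, target, weights) for r in bag)

def rank_repository(bags, target, weights):
    """Return image indices ordered from most to least similar."""
    scores = [image_score(bag, target, weights) for bag in bags]
    return sorted(range(len(bags)), key=lambda i: scores[i], reverse=True)

# Toy usage: three images, each segmented into a few regions with 4 features.
rng = np.random.default_rng(0)
bags = [rng.random((rng.integers(2, 5), 4)) for _ in range(3)]
target = np.array([0.5, 0.5, 0.5, 0.5])   # hypothetical learned target point
weights = np.array([1.0, 2.0, 0.5, 1.0])  # hypothetical learned feature weights
print(rank_repository(bags, target, weights))
```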
