Abstract
Visual information plays a crucial role in domains ranging from medical diagnosis and journalism to crime prevention and surveillance. Whereas domain-specific images carry well-defined semantics, interpreting visual information becomes considerably more complex for natural images; the maxim 'A picture is worth a thousand words' captures this inherent difficulty concisely. Indexing large image databases for efficient retrieval is crucial in fields such as journalism, biomedicine, and forensics, yet manual indexing of such large databases is highly subjective and time-consuming. In contrast, content-based image retrieval (CBIR) focuses on the development of efficient retrieval mechanisms based on image features or on meta-data used for image annotation. Conventional approaches to CBIR represent images by image-based features, which range from global descriptors such as color or intensity histograms to local ones such as shape and texture; these features, alone and in combination, have been used previously for CBIR. For example, (Deng et al., 2001) proposes a region-based color descriptor that models the color values in a region together with their percentages. Similarly, (Hadjidemetriou et al., 2004) employs multi-resolution histograms for the retrieval of textured images, and (Jeong et al., 2004) extracts color histograms through Gaussian mixture vector quantization. Shape descriptors and shape-matching algorithms for image retrieval are proposed in (Belongie et al., 2002) and (Petrakis et al., 2002), respectively. However, the use of low-level image features such as color histograms, shape, and texture attributes introduces a semantic gap (Chen et al., 2004), which arises from the inability of such low-level features to describe the objects within an image and their inter-relations.
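To make the global-descriptor approach concrete, the following is a minimal, illustrative sketch (not the method of any of the cited works) of histogram-based retrieval: each image is reduced to a normalized joint RGB color histogram, and database images are ranked by histogram-intersection similarity to the query. The function names and the choice of 8 bins per channel are assumptions for illustration.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize each RGB channel into `bins` levels and count joint
    occurrences, giving a bins**3-dimensional global descriptor."""
    # image: H x W x 3 uint8 array
    quantized = (image.astype(np.uint32) * bins) // 256   # per-channel bin index
    codes = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()  # normalize so images of different sizes are comparable

def retrieve(query, database, k=3):
    """Rank database images by histogram-intersection similarity to the query."""
    q = color_histogram(query)
    sims = [np.minimum(q, color_histogram(img)).sum() for img in database]
    return np.argsort(sims)[::-1][:k]  # indices of the k most similar images
```

Note that such a descriptor is exactly the kind of low-level feature discussed above: two images with similar color statistics but entirely different objects would be judged similar, which is one way the semantic gap manifests.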
The use of such low-level features places the burden of achieving semantically coherent results on the user interface. Various techniques of relevance feedback (Rui et al., 1998) have
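Relevance feedback is commonly illustrated with a Rocchio-style update, in which the query feature vector is moved toward images the user marks as relevant and away from non-relevant ones. The following sketch assumes this formulation (the weights alpha, beta, and gamma are conventional Rocchio parameters, not values from the text):

```python
import numpy as np

def rocchio_update(query, relevant, non_relevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Shift the query feature vector toward the mean of user-marked
    relevant images and away from the mean of non-relevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if relevant:
        q += beta * np.mean(relevant, axis=0)
    if non_relevant:
        q -= gamma * np.mean(non_relevant, axis=0)
    return np.clip(q, 0.0, None)  # keep descriptor entries non-negative
```

After each feedback round the updated query is re-submitted to the retrieval engine, so the system converges toward the user's intent without the features themselves becoming any more semantic.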