Abstract

Image retrieval is an active research area, and new approaches are essential to improve the inference of semantic relationships in image annotation from low-level features, enhance semantic retrieval, and bridge the Semantic Gap. Current research on automatic image annotation pays little attention to the spatial relationships between objects in images and thus fails to provide and extract this relational information. Spatial relationships could enrich the semantic description of images and enhance the power and precision of retrieval queries. This paper discusses work done to identify and select a set of specific spatial relationship terms to be used in further experiments on automating spatial extraction from images. Spatial relationships are considered fuzzy and usually depend on human interpretation. A preliminary study, an online Image Description Survey consisting of ten Corel Dataset images, was developed to discover how people describe images using spatial terms. The survey was implemented in PHP and published online for the public to respond. Analysis of the results found 45 spatial terms used by respondents to describe the relations between objects in the images, of which 28 occurred more than once. Further analysis and discussion focus on these 28 terms. The most commonly used was the relative spatial term "above", with a frequency of 120, followed by the absolute spatial term "bottom", with a frequency of 57. The term frequencies of the spatial relationships are also presented as a correlation matrix for detecting significant relations between the spatial terms used across images. Respondents used spatial terminology in a number of different ways when describing an image, and analysing these responses is challenging. The study yields a set of frequently used spatial terms.
These terms and their reciprocals will be considered in the next stage of algorithm development for computing and extracting spatial terms automatically from images, to enhance the capability of the retrieval system and to meet the needs and requirements of users.
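The correlation-matrix analysis mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual computation: the term names and frequency counts below are hypothetical placeholders standing in for the survey's per-image term frequencies, and Pearson correlation (via `numpy.corrcoef`) is one plausible choice of association measure.

```python
import numpy as np

# Hypothetical frequency counts: rows are spatial terms, columns are the
# survey images. Real values would come from the survey responses.
terms = ["above", "bottom", "left", "right"]
freq = np.array([
    [12,  9,  7, 15],   # "above"
    [ 5,  8,  4,  6],   # "bottom"
    [ 3,  2,  6,  1],   # "left"
    [ 4,  5,  3,  2],   # "right"
], dtype=float)

# Pearson correlation between the terms' frequency profiles across images:
# a high off-diagonal value suggests two terms are used with similar
# frequency patterns over the same images.
corr = np.corrcoef(freq)
print(np.round(corr, 2))
```

Strongly correlated pairs would then be candidates for "significant relations" between spatial terms, flagging terms that respondents tend to use together on the same images.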
