Abstract

To support more effective search of large-scale weakly-tagged image collections, we have developed a novel algorithm that integrates both the visual similarity contexts between images and the semantic similarity contexts between their tags for topic network generation and word sense disambiguation. First, a topic network is generated to characterize both the semantic and the visual similarity contexts between image topics more completely. By organizing large numbers of image topics according to their cross-modal inter-topic similarity contexts, the topic network makes the semantics behind the tag space explicit, so that users can gain insight rapidly and formulate their queries more precisely. Second, our word sense disambiguation algorithm leverages the topic network to exploit both the visual similarity contexts between images and the semantic similarity contexts between their tags, addressing the problems of polysemes and synonyms more effectively and thereby significantly improving precision and recall for image retrieval. Our experiments on large-scale Flickr and LabelMe image collections have produced very positive results.
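To illustrate the idea of cross-modal inter-topic similarity, the following is a minimal sketch (not the authors' implementation): each pair of topics is assumed to have a precomputed visual similarity and a semantic similarity, which are fused by a hypothetical weighted combination with parameter `alpha`; the topic network then keeps, for each topic, its top-k most similar neighbors under the fused score.

```python
def inter_topic_similarity(visual_sim, semantic_sim, alpha=0.5):
    """Hypothetical fusion of cross-modal similarities: a simple
    convex combination of visual and semantic similarity scores."""
    return alpha * visual_sim + (1 - alpha) * semantic_sim

def build_topic_network(topics, vis, sem, alpha=0.5, k=2):
    """Build a topic network as an adjacency map.

    topics: list of topic names
    vis, sem: dicts mapping frozenset({t, u}) -> similarity in [0, 1]
    Returns {topic: [(neighbor, fused_score), ...]} with the k
    strongest neighbors per topic.
    """
    edges = {}
    for t in topics:
        scores = []
        for u in topics:
            if u == t:
                continue
            pair = frozenset((t, u))
            scores.append((u, inter_topic_similarity(vis[pair], sem[pair], alpha)))
        scores.sort(key=lambda x: -x[1])  # strongest neighbors first
        edges[t] = scores[:k]
    return edges
```

For example, with the polysemous tag "jaguar", such a network would link the topic for the animal sense more strongly to "leopard" than to the car sense, which is the kind of inter-topic context the disambiguation step can exploit.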
