Abstract

Over the past few years, there has been a massive explosion of multimedia content, such as un-annotated images, on the web. Automatic image annotation is therefore an important task for multimedia retrieval: by automatically assigning semantic concepts to un-annotated images, retrieval can be performed over those concepts. In this work, we address the problem of automatic image annotation, namely automatically describing the semantic content of an image with concept classifiers. Traditional approaches mainly consider the link between an image and a concept but ignore the links among annotation concepts. We propose a novel Google Semantic link based image Annotation Model (GSAM), which leverages an associated concept network (ACN) to enhance automatic semantic annotation performance. When several concepts co-occur frequently in the training set, our model uses Google semantic links to increase the chance of predicting one concept when there is strong visual evidence for the others. Additionally, the fusion of Google concept links with local concept links, together with semantic links between single concepts and multi-concepts, is employed to further improve annotation performance. To investigate the feasibility and effectiveness of our approach, we conduct experiments on the Corel and IAPR datasets. The experimental results show that our approach, which exploits semantic links, outperforms existing state-of-the-art methods.
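The core idea the abstract describes, raising a concept's prediction score when strongly-evidenced concepts are semantically linked to it, can be sketched as follows. This is a minimal illustration, not the paper's actual GSAM formulation: the `ngd` function is the well-known Normalized Google Distance over page-hit counts (one common way to derive "Google semantic links"), and `propagate`, `alpha`, and the example scores are hypothetical names and values chosen for the sketch.

```python
import math

def ngd(fx, fy, fxy, n):
    # Normalized Google Distance between two concepts, computed from
    # search-hit counts: fx, fy = hits for each term alone, fxy = hits
    # for both terms together, n = total number of indexed pages.
    # Smaller values indicate a stronger semantic link.
    return (max(math.log(fx), math.log(fy)) - math.log(fxy)) / \
           (math.log(n) - min(math.log(fx), math.log(fy)))

def propagate(scores, similarity, alpha=0.7):
    # Blend each concept's visual classifier score with evidence from
    # semantically linked concepts: a concept gains score when its
    # neighbors in the concept network have strong visual evidence.
    # `similarity` maps ordered concept pairs to link strengths in [0, 1].
    out = {}
    for c, s in scores.items():
        neighbor = sum(similarity.get((c, d), 0.0) * sd
                       for d, sd in scores.items() if d != c)
        norm = sum(similarity.get((c, d), 0.0)
                   for d in scores if d != c) or 1.0
        out[c] = alpha * s + (1 - alpha) * neighbor / norm
    return out
```

For instance, if "sky" has strong visual evidence and a strong link to "cloud", the propagated score for "cloud" rises above its raw classifier score, which is the behavior the abstract attributes to the semantic-link step.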
