Abstract

This paper presents contextual kernel and spectral methods for learning the semantics of images that allow us to automatically annotate an image with keywords. First, to exploit the context of visual words within images for automatic image annotation, we define a novel spatial string kernel to quantify the similarity between images. Specifically, we represent each image as a 2-D sequence of visual words and measure the similarity between two 2-D sequences using the shared occurrences of s-length 1-D subsequences by decomposing each 2-D sequence into two orthogonal 1-D sequences. Based on our proposed spatial string kernel, we further formulate automatic image annotation as a contextual keyword propagation problem, which can be solved very efficiently by linear programming. Unlike the traditional relevance models that treat each keyword independently, the proposed contextual kernel method for keyword propagation takes into account the semantic context of annotation keywords and propagates multiple keywords simultaneously. Significantly, this type of semantic context can also be incorporated into spectral embedding for refining the annotations of images predicted by keyword propagation. Experiments on three standard image datasets demonstrate that our contextual kernel and spectral methods can achieve significantly better results than the state of the art.
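To make the kernel construction concrete, the sketch below (in Python) gives one plausible reading of the spatial string kernel: each image is a 2-D grid of visual-word IDs, decomposed into its row-wise and column-wise 1-D sequences, and similarity is the shared-occurrence count of s-length subsequences across the two decompositions. This is a minimal illustration, not the paper's exact formulation: the use of contiguous s-grams (rather than possibly gapped subsequences with decay weights), the dot-product aggregation of occurrence counts, and the names `to_1d_sequences`, `sgram_counts`, and `spatial_string_kernel` are all our assumptions.

```python
# A minimal sketch of a spatial string kernel over 2-D visual-word grids.
# Assumptions (not from the paper): contiguous s-grams stand in for
# "s-length 1-D subsequences", and similarity is the dot product of
# subsequence occurrence counts, summed over both decompositions.
import numpy as np
from collections import Counter


def to_1d_sequences(grid):
    """Decompose a 2-D grid of visual-word IDs into its two orthogonal
    1-D decompositions: row-wise and column-wise sequences."""
    grid = np.asarray(grid)
    rows = [tuple(r) for r in grid]    # horizontal 1-D sequences
    cols = [tuple(c) for c in grid.T]  # vertical 1-D sequences
    return rows, cols


def sgram_counts(sequences, s):
    """Count every contiguous s-length subsequence in a list of sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - s + 1):
            counts[seq[i:i + s]] += 1
    return counts


def spatial_string_kernel(grid_a, grid_b, s=2):
    """Shared-occurrence similarity of s-length subsequences, summed over
    the horizontal and vertical decompositions of the two images."""
    k = 0
    for seqs_a, seqs_b in zip(to_1d_sequences(grid_a),
                              to_1d_sequences(grid_b)):
        ca, cb = sgram_counts(seqs_a, s), sgram_counts(seqs_b, s)
        # Dot product over the subsequences the two images share.
        k += sum(ca[g] * cb[g] for g in ca.keys() & cb.keys())
    return k


# Toy usage: two 3x3 "images" over a small visual-word vocabulary.
img1 = [[1, 2, 3],
        [1, 2, 3],
        [4, 5, 6]]
img2 = [[1, 2, 4],
        [1, 2, 3],
        [4, 5, 6]]
print(spatial_string_kernel(img1, img2, s=2))
```

The appeal of the orthogonal decomposition is that it reduces a 2-D matching problem to two 1-D ones, each of which can reuse standard string-kernel machinery while still retaining horizontal and vertical spatial context.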
