Abstract

With the advancement of imaging and information technologies, image retrieval has become a bottleneck. The key to efficient and effective image retrieval is a text-based approach, in which automatic image annotation is a critical task. One important but not fully studied issue is the basic unit of annotation, i.e., what part of an image is to be labeled. The habitual approach is to label the segments produced by a segmentation algorithm; however, segmentation often breaks an object into pieces, which not only introduces noise into the annotation but also increases the complexity of the model. We adopt an attention-driven image interpretation method to extract attentive objects from an over-segmented image and use these attentive objects for annotation. In doing so, the basic unit of annotation is upgraded from segments to attentive objects. Visual classifiers are trained and a concept association network (CAN) is constructed for object recognition. A CAN consists of a number of concept nodes, each a trained neural network (visual classifier) that recognizes a single object; the nodes are connected through correlation links to form a network. Given an image containing several unknown attentive objects, every node in the CAN generates its own response, and these responses propagate to the other nodes through the network simultaneously. For a combination of nodes under investigation, the loopy propagations can be characterized by a linear system, and the response of the combination is obtained by solving that system. The annotation problem is thereby converted into finding the node combination with the maximum response. Annotation experiments show better accuracy for attentive objects than for segments, and that the concept association network improves annotation performance.
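The abstract does not spell out the linear system, so the following Python sketch is only one plausible reading: assume each node's steady-state response equals its initial classifier response plus the correlation-weighted responses of the other nodes in the combination, i.e., r = r0 + W r, which gives the linear system (I - W) r = r0. All names here (combination_response, annotate, W, r0) are hypothetical illustrations, not identifiers from the paper.

```python
import numpy as np
from itertools import combinations

def combination_response(r0, W, subset):
    """Hypothetical sketch: steady-state response of a node combination.

    Assumes the loopy propagation restricted to the subset satisfies
    r = r0 + W r, so the steady state solves (I - W_s) r_s = r0_s.
    (Requires the spectral radius of W_s to be below 1 for a stable
    propagation; this is an assumption, not stated in the paper.)
    """
    idx = list(subset)
    W_s = W[np.ix_(idx, idx)]          # correlation links within the subset
    r0_s = r0[idx]                     # initial responses of the classifiers
    r_s = np.linalg.solve(np.eye(len(idx)) - W_s, r0_s)
    return r_s.sum()                   # aggregate response of the combination

def annotate(r0, W, k):
    """Return the k-node combination with the maximum response."""
    n = len(r0)
    return max(combinations(range(n), k),
               key=lambda s: combination_response(r0, W, s))

# Hypothetical usage: 5 concept nodes, pick the best 2-node combination.
rng = np.random.default_rng(0)
r0 = rng.random(5)                     # initial classifier responses
W = 0.1 * rng.random((5, 5))           # weak correlation links (assumed)
np.fill_diagonal(W, 0.0)               # no self-links
print(annotate(r0, W, k=2))
```

Under this reading, exhaustively scoring combinations is feasible only for small node sets; the paper itself does not say how the search over combinations is performed.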
