Abstract

Purpose: Automatic image annotation is a non-trivial problem. Training images often carry unbalanced and incomplete annotations, producing a semantic gap between an image's visual features and its textual description. Existing methods rely on computationally complex algorithms that optimize the visual features and annotate a new image using all training images and keywords, which can reduce accuracy. A compact visual descriptor is therefore needed, together with a method for selecting a group of the most informative training images for each test image.

Results: A methodology for automatic image annotation is formulated, based on maximizing the posterior probability of associating a keyword with a visual image descriptor. Six global descriptors were combined into a single descriptor, whose size was then reduced to several hundred elements using principal component analysis. Experiments showed a 7% improvement in annotation precision and a 1% improvement in recall.

Practical relevance: The compact visual descriptor and the automatic annotation of images based on forming homogeneous textual-visual groups can be used in Internet retrieval systems to improve image search quality.
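The descriptor construction described in the Results can be sketched as follows: concatenate several global descriptors per image, then compress the combined vector with principal component analysis. This is a minimal illustration using scikit-learn; the descriptor names, their dimensions, and the target dimensionality are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical stand-ins for six global descriptors (e.g. color,
# texture, edge histograms); the real dimensions depend on the
# feature extractors actually used.
n_images = 200
descriptor_dims = [64, 128, 80, 96, 59, 73]  # assumed sizes

# The combined descriptor for each image is the concatenation
# of its six global descriptors.
descriptors = [rng.random((n_images, d)) for d in descriptor_dims]
combined = np.hstack(descriptors)  # shape: (n_images, 500)

# Reduce the combined descriptor to a few hundred components,
# as the abstract describes (the exact target is an assumption).
pca = PCA(n_components=100)
compact = pca.fit_transform(combined)

print(combined.shape, compact.shape)
```

The compact vectors can then be used to match each test image against only its most similar training images rather than the full training set.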
