Abstract

This paper describes a technique that supports efficient and effective Content-Based Image Retrieval (CBIR) in very large image archives, as well as automatic image tagging. The proposed technique uses a unified representation for an image's visual features and its textual description. Images are clustered according to their visual features, while the textual content is used to associate relevant tags with the images belonging to each cluster. The system supports retrieval based on query-image similarity, on textual queries, and on mixed-mode queries composed of an image part and a textual part, in addition to automatic image tagging.
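A minimal sketch of the clustering-based tag association and similarity retrieval outlined above, assuming that visual and textual features have already been mapped into the same unified vector space. All names and parameters here (feature dimensionality, number of clusters, the auto_tag and retrieve helpers) are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy unified feature vectors, one per archived image (assumed precomputed),
# plus the known tags attached to some of those images.
image_features = rng.normal(size=(1000, 64))
image_tags = [["tag%d" % rng.integers(5)] for _ in range(1000)]

# 1) Cluster the archive according to image visual features.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(image_features)

# 2) Associate each cluster with the tags of its member images.
cluster_tags = {}
for label, tags in zip(kmeans.labels_, image_tags):
    cluster_tags.setdefault(label, []).extend(tags)

def auto_tag(query_feature, top_k=3):
    """Automatic tagging: assign the most frequent tags of the nearest cluster."""
    label = int(kmeans.predict(query_feature[None, :])[0])
    tags = cluster_tags.get(label, [])
    counts = {t: tags.count(t) for t in set(tags)}
    return sorted(counts, key=counts.get, reverse=True)[:top_k]

def retrieve(query_feature, top_k=5):
    """Rank archive images by cosine similarity to the query vector.
    A mixed-mode query would combine the image and text vectors (for
    example by averaging them) before calling this function."""
    q = query_feature / np.linalg.norm(query_feature)
    db = image_features / np.linalg.norm(image_features, axis=1, keepdims=True)
    scores = db @ q
    return np.argsort(-scores)[:top_k]

print(auto_tag(rng.normal(size=64)))
print(retrieve(rng.normal(size=64)))
```

The same nearest-neighbour search serves all three query modes because images, texts, and their combinations are represented in one vector space; only the construction of the query vector changes.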
