Abstract

Integrating visual and semantic information has been shown to improve the accuracy of social image clustering. However, existing approaches are limited by the heterogeneity gap between the visual and semantic modalities, and their performance degrades significantly because the tags in the semantic modality are commonly sparse and incomplete. To address these problems, we propose a novel clustering framework that discovers reasonable categories in unlabeled social images under the guidance of human explanations. First, a novel Explanation Generation Model (EGM) is proposed to automatically enrich the sparse and incomplete tags with textual information drawn from an external lexical database encoding human knowledge. Then, a novel clustering algorithm, Group Constrained Information Maximization (GCIM), is proposed to learn image categories. In this algorithm, a new type of constraint, group-level side information, is defined to bridge the well-known heterogeneity gap between the visual and textual modalities. Finally, an interactive draw-and-merge optimization method is proposed to ensure an optimal solution. Extensive experiments on several social image datasets, including NUS-Wide, IAPRTC, MIRFlickr, ESP-Game, and COCO, demonstrate the superiority of the proposed approach over state-of-the-art baselines.
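
To make the tag-enrichment idea concrete, the sketch below expands a sparse tag set with synonyms and hypernyms from a lexical database. WordNet (accessed via NLTK) stands in for the knowledge source here as an assumption, and the function and parameter names are illustrative; the paper's EGM is a dedicated model, so this is only a sketch of the augmentation step, not the proposed method.

```python
# Minimal sketch of tag augmentation from a lexical database (assumed here:
# WordNet via NLTK; the paper's EGM is more elaborate than this).
# Setup: pip install nltk, then run nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def expand_tags(tags, max_new_per_tag=3):
    """Augment a sparse tag set with synonyms and hypernyms from WordNet."""
    expanded = set(tags)
    for tag in tags:
        candidates = []
        for synset in wn.synsets(tag):
            # Synonyms: other lemma names in the same synset.
            candidates.extend(l.replace("_", " ") for l in synset.lemma_names())
            # Hypernyms: more general concepts (e.g., "dog" -> "canine").
            for hyper in synset.hypernyms():
                candidates.extend(l.replace("_", " ") for l in hyper.lemma_names())
        new = [c for c in candidates if c not in expanded]
        expanded.update(new[:max_new_per_tag])
    return sorted(expanded)

# Sparse, incomplete tags -> a richer textual description for clustering.
print(expand_tags(["dog", "beach"]))
```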
