Abstract

In this paper, we address two research issues: the automatic identification of visual attributes and the learning of semantic relations between visual attributes and object categories. The contribution is two-fold. First, we provide a uniform framework that reliably extracts both categorical attributes and depictive attributes. Second, we incorporate the obtained semantic associations between visual attributes and object categories into a text-based topic model and extract descriptive latent topics from external textual knowledge sources. Specifically, we show that when mining natural language descriptions from external knowledge sources, the relations between semantic visual attributes and object categories can be encoded as Must-Links and Cannot-Links, which are represented by a Dirichlet-Forest prior. To reduce the manual supervision and labeling required in the image categorization process, we introduce a semi-supervised training framework based on a soft-margin semi-supervised SVM classifier. We also show that large-scale image categorization results can be significantly improved by incorporating the automatically acquired visual attributes. Experimental results show that the proposed model better describes object-related attributes and yields more descriptive latent topics.
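
The following is a minimal sketch, not the authors' implementation, of how mined attribute-category associations might be turned into Must-Link and Cannot-Link word-pair constraints of the kind that a Dirichlet-Forest prior consumes. The attribute and category names, the association scores, and the two thresholds are all hypothetical; the construction of the Dirichlet-Forest prior itself and the topic-model inference are omitted.

```python
# Hedged sketch: encode attribute-category associations as Must-Links /
# Cannot-Links. All names, scores, and thresholds below are hypothetical.
from itertools import combinations

# Hypothetical mined associations: (attribute, category) -> strength in [0, 1]
associations = {
    ("striped", "zebra"): 0.92,
    ("striped", "tiger"): 0.88,
    ("hooved", "zebra"): 0.81,
    ("metallic", "zebra"): 0.03,
    ("metallic", "car"): 0.95,
}

MUST_THRESHOLD = 0.8    # assumed cut-off for a strong association
CANNOT_THRESHOLD = 0.1  # assumed cut-off for an implausible association

def build_links(assoc):
    """Group strongly associated attributes by category, then emit
    word-pair constraints usable as input to a constrained topic model."""
    must_links, cannot_links = set(), set()
    by_category = {}
    for (attr, cat), score in assoc.items():
        if score >= MUST_THRESHOLD:
            by_category.setdefault(cat, set()).add(attr)
        elif score <= CANNOT_THRESHOLD:
            # weakly associated attribute should not share a topic with the category word
            cannot_links.add(tuple(sorted((attr, cat))))
    for cat, attrs in by_category.items():
        # attributes strongly tied to the same category should co-occur in a topic
        for a, b in combinations(sorted(attrs), 2):
            must_links.add((a, b))
        # each strong attribute is also linked to the category word itself
        for a in attrs:
            must_links.add(tuple(sorted((a, cat))))
    return must_links, cannot_links

if __name__ == "__main__":
    must, cannot = build_links(associations)
    print("Must-Links:", sorted(must))
    print("Cannot-Links:", sorted(cannot))
```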
