Abstract

There is growing interest in social image classification because of its importance in web-based image applications. Although many image classification approaches exist, integrating the multi-modal content of social images remains difficult, since the textual content and visual content are represented in two heterogeneous feature spaces. In this study, we propose a multi-modal learning algorithm that fuses the multiple features seamlessly through their correlation. Specifically, we learn a linear classification module for each of the two types of features and then integrate the two modules via l2 normalization in a joint model. With the joint model, classification based on the visual features is reinforced by classification based on the textual features, and vice versa. A test image can then be classified using both its textual and visual features by combining the outputs of the two classifiers. To evaluate the approach, we conduct experiments on real-world datasets, and the results show the superiority of the proposed algorithm over the baselines.
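To make the described scheme concrete, the following is a minimal NumPy sketch of one way such a coupled multi-modal model could be set up. It assumes the joint model ties the two linear classifiers together with an l2 penalty on the disagreement between their decision scores and combines the scores at test time; the class name CoupledLinearClassifier, the squared-hinge losses, and all hyperparameters are illustrative assumptions, not the paper's actual formulation.

# Hypothetical sketch of a coupled multi-modal linear classifier.
# The objective and all names here are illustrative assumptions:
# two linear classifiers (one per modality) are trained jointly,
# an l2 penalty pulls their decision scores toward each other,
# and the final prediction combines the two scores.

import numpy as np


class CoupledLinearClassifier:
    def __init__(self, lam_reg=0.1, lam_couple=1.0, lr=0.01, epochs=200):
        self.lam_reg = lam_reg        # l2 weight regularization
        self.lam_couple = lam_couple  # strength of the score-agreement term
        self.lr = lr
        self.epochs = epochs

    def fit(self, X_text, X_vis, y):
        """X_text: (n, d_t) textual features, X_vis: (n, d_v) visual features,
        y: (n,) labels in {-1, +1}."""
        n, d_t = X_text.shape
        d_v = X_vis.shape[1]
        self.w_t = np.zeros(d_t)
        self.w_v = np.zeros(d_v)
        for _ in range(self.epochs):
            s_t = X_text @ self.w_t          # textual decision scores
            s_v = X_vis @ self.w_v           # visual decision scores
            # squared-hinge gradients for each modality's own classifier
            err_t = np.maximum(0.0, 1.0 - y * s_t)
            err_v = np.maximum(0.0, 1.0 - y * s_v)
            g_t = -(X_text.T @ (y * err_t)) / n + self.lam_reg * self.w_t
            g_v = -(X_vis.T @ (y * err_v)) / n + self.lam_reg * self.w_v
            # coupling term: l2 penalty on the disagreement of the two scores,
            # so each classifier is reinforced by the other modality
            diff = s_t - s_v
            g_t += self.lam_couple * (X_text.T @ diff) / n
            g_v -= self.lam_couple * (X_vis.T @ diff) / n
            self.w_t -= self.lr * g_t
            self.w_v -= self.lr * g_v
        return self

    def predict(self, X_text, X_vis):
        # combine the two modality-specific scores for the final decision
        scores = X_text @ self.w_t + X_vis @ self.w_v
        return np.where(scores >= 0, 1, -1)


if __name__ == "__main__":
    # tiny synthetic example: labels drive both feature spaces weakly
    rng = np.random.default_rng(0)
    y = rng.choice([-1, 1], size=200)
    X_text = y[:, None] * 0.5 + rng.normal(size=(200, 50)) * 0.8
    X_vis = y[:, None] * 0.3 + rng.normal(size=(200, 128))
    model = CoupledLinearClassifier().fit(X_text, X_vis, y)
    acc = (model.predict(X_text, X_vis) == y).mean()
    print(f"training accuracy: {acc:.2f}")

The gradient-descent training loop and synthetic data are only there to make the sketch self-contained; the point is the structure of the objective, in which each modality has its own linear module and the l2 coupling term lets the textual and visual classifiers reinforce each other before their scores are combined for prediction.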
