Abstract

Learning the underlying semantics of multimodal data is interesting and challenging, since each modality makes its own contribution to the high-level semantics. However, multimodal data are usually represented by heterogeneous features, which makes it difficult to learn a semantic subspace in which cross-modal correlation is captured and preserved. In this paper, we analyze sparse canonical correlation for dimension reduction of heterogeneous multimodal features; moreover, we propose a subspace optimization strategy with structural multi-feature fusion, which fuses the results of structural content correlation learning and graph-based semantic correlation learning into a single objective function. Our algorithm has been applied to content-based multimedia applications, including image classification and multimedia retrieval. Comprehensive experiments demonstrate the superiority of our method over several existing algorithms.
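The paper's fused objective is not reproduced here; as a rough illustration of the sparse canonical correlation component only, the sketch below estimates one pair of sparse canonical weight vectors by soft-thresholded power iteration on the cross-covariance matrix (a standard penalized-matrix-decomposition style approach, not the authors' exact formulation). The function name sparse_cca, the penalty parameters lam_u and lam_v, and the toy two-modality data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(a, lam):
    """Element-wise soft-thresholding: sign(a) * max(|a| - lam, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def sparse_cca(X, Y, lam_u=0.3, lam_v=0.3, n_iter=100):
    """Estimate one pair of sparse canonical weight vectors (u, v) via
    soft-thresholded power iteration on the cross-covariance X^T Y / n.
    X: (n, p) features of modality 1, Y: (n, q) features of modality 2,
    both assumed column-centered."""
    C = X.T @ Y / X.shape[0]                 # cross-covariance, shape (p, q)
    # warm-start v with the leading right singular vector of C
    v = np.linalg.svd(C, full_matrices=False)[2][0]
    u = np.zeros(C.shape[0])
    for _ in range(n_iter):
        u = soft_threshold(C @ v, lam_u)     # sparse weight update for modality 1
        if np.linalg.norm(u) > 0:
            u /= np.linalg.norm(u)
        v = soft_threshold(C.T @ u, lam_v)   # sparse weight update for modality 2
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
    return u, v

# Toy usage: two heterogeneous "modalities" sharing one latent factor z.
rng = np.random.default_rng(0)
n = 200
z = rng.standard_normal((n, 1))
X = np.hstack([z + 0.1 * rng.standard_normal((n, 3)), rng.standard_normal((n, 17))])
Y = np.hstack([z + 0.1 * rng.standard_normal((n, 2)), rng.standard_normal((n, 28))])
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)
u, v = sparse_cca(X, Y)
print("projected correlation:", np.corrcoef(X @ u, Y @ v)[0, 1])
print("nonzero weights:", np.count_nonzero(u), np.count_nonzero(v))
```

In this toy setting the L1-style soft-thresholding drives the weights on the pure-noise feature dimensions to zero, so the learned projection concentrates on the few features that actually share the latent factor across the two views.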
