Abstract

Most existing supervised subspace learning methods use label information for high-level semantic exploration and learn a single pair of common mapping matrices shared by all classes in the retrieval task. However, semantic distributions differ across classes, so in this paper we propose to learn a different pair of mapping matrices for each class, which facilitates learning a more discriminative subspace. In addition, semantic overlap usually exists among different classes, reflected in samples shared across classes. We therefore propose the multi-class joint subspace learning algorithm (MJSL), which distinguishes the different classes while mining as much of the shared information underlying this semantic overlap as possible. Specifically, MJSL explores high-level semantics, preserves pairwise closeness, and selects optimal features to obtain the most discriminative subspace for each class, while trace-norm-based joint learning uncovers the shared information of the semantic overlap among classes. Once the optimal mapping matrices have been learned via an iterative joint optimization algorithm with fast convergence, a linear SVM classifier is trained to map multi-modal data to their potential semantic classes, so that the most relevant mapping matrices can be identified adaptively for each query and retrieval performance improved. Extensive experiments on two popular public datasets demonstrate that our algorithm outperforms several state-of-the-art cross-modal retrieval algorithms.
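The abstract does not state the objective in closed form, but the two ingredients it names, per-class mapping matrices and a trace-norm (nuclear-norm) coupling across classes, can be sketched with proximal gradient descent. The sketch below is an illustrative simplification under stated assumptions, not the paper's exact MJSL objective: each class gets its own least-squares mapping to a semantic space, and singular value thresholding of the stacked mappings plays the role of the trace-norm joint term; the function names `svt` and `learn_class_mappings` are hypothetical.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal operator of the
    # trace (nuclear) norm, tau * ||M||_*.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def learn_class_mappings(X, S, labels, n_iter=200, lr=1e-2, lam=0.1):
    """Illustrative per-class subspace learning with trace-norm coupling.

    X: (n, d) features; S: (n, k) semantic targets; labels: (n,) class ids.
    Each class c gets its own mapping W_c fitted by least squares
    X_c W_c ~ S_c; a trace-norm penalty on the stacked [W_1; ...; W_C]
    encourages shared low-rank structure across classes (a stand-in for
    the paper's joint learning term, not its exact formulation).
    """
    classes = np.unique(labels)
    d, k = X.shape[1], S.shape[1]
    W = {c: np.zeros((d, k)) for c in classes}
    for _ in range(n_iter):
        # Gradient step on each class's least-squares loss.
        for c in classes:
            Xc, Sc = X[labels == c], S[labels == c]
            grad = Xc.T @ (Xc @ W[c] - Sc) / len(Xc)
            W[c] = W[c] - lr * grad
        # Proximal step: the trace norm on the stacked mappings
        # couples the classes and mines shared structure.
        stacked = svt(np.vstack([W[c] for c in classes]), lr * lam)
        for i, c in enumerate(classes):
            W[c] = stacked[i * d:(i + 1) * d]
    return W
```

In the full method, a classifier (the abstract uses a linear SVM) would then route each query to the mapping matrices of its predicted class.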
