Abstract

Multimedia content analysis and understanding, including tasks such as action recognition and image classification, is a fundamental research problem. One effective strategy for improving performance is to design a discriminative visual representation, for example by combining multiple feature sets. However, simply concatenating these features may cause high dimensionality and introduce noise. Feature selection and fusion are common choices for building a multiple-feature representation, and multi-task feature learning has been shown to be effective in many studies. In this paper, we propose a multi-task multi-view feature selection and fusion method that selects and fuses discriminative features. For discriminative feature selection, we learn the selection matrix W by minimizing a trace ratio objective function. To couple multiple tasks, we employ ℓ2,1-norm regularization, which solves each individual task while sharing information across tasks. For multiple feature fusion, we incorporate the local structure of each view into a Laplacian matrix. Since the Laplacian matrix is constructed in an unsupervised manner and the scaled category indicator matrix is solved for iteratively, our method is fully unsupervised. Experimental results on four action recognition datasets and five image classification datasets demonstrate the effectiveness of the proposed multi-task multi-view feature selection and fusion.
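
For concreteness, a generic trace-ratio feature-selection objective with an ℓ2,1-norm multi-task regularizer can be sketched as below; this is only an illustrative form under assumed scatter-type matrices A and B, not necessarily the exact formulation of the paper:

\min_{W} \; \frac{\operatorname{Tr}\!\left(W^{\top} A W\right)}{\operatorname{Tr}\!\left(W^{\top} B W\right)} + \lambda \, \|W\|_{2,1},
\qquad \|W\|_{2,1} = \sum_{i} \left\| w^{i} \right\|_{2},

where the rows w^i of the selection matrix W correspond to features, the ℓ2,1-norm drives entire rows toward zero so that features are selected jointly across tasks, and λ balances the trace-ratio term against the row-sparsity regularizer.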
