Abstract

The features used in many multimedia-analysis applications are frequently of very high dimension. Feature extraction offers several advantages in such high-dimensional settings, and many recent studies have adopted multi-task feature extraction approaches, which often outperform their single-task counterparts. However, most of these methods consider only data represented by a single type of feature, even though images are usually described by several types of features, i.e., multiple views. We therefore propose a novel multi-view multi-task feature extraction (MVMTFE) framework that handles multi-view features for image classification. In particular, MVMTFE simultaneously learns a feature extraction matrix for each view and the view combination coefficients. In this way, MVMTFE not only handles correlated and noisy features but also exploits the complementarity of different views to further reduce feature redundancy within each view. An alternating algorithm is developed for the resulting optimization problem, and each sub-problem can be solved efficiently. Experiments on a real-world web image dataset demonstrate the effectiveness and superiority of the proposed method.

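Since the abstract describes the optimization only at a high level, the following is a minimal sketch of how such an alternating scheme could look. The squared loss, the reweighted-ridge approximation of an l2,1 row-sparsity penalty, the error-based weight update, and all names and parameters (mvmtfe_sketch, lam, gamma) are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def mvmtfe_sketch(X_views, Y, lam=0.1, gamma=1.0, n_iter=20):
    """Alternating-optimization sketch for multi-view multi-task
    feature extraction (hypothetical objective).

    X_views : list of (n_samples, d_v) feature matrices, one per view
    Y       : (n_samples, n_tasks) target matrix shared across tasks
    Returns the per-view extraction matrices W_v and view weights alpha.
    """
    V = len(X_views)

    # Initialize each W_v with a ridge solution; alpha starts uniform.
    W = [np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
         for X in X_views]
    alpha = np.full(V, 1.0 / V)

    for _ in range(n_iter):
        # Step 1: fix alpha, update each view's extraction matrix W_v.
        # A reweighted ridge step approximates an l2,1 penalty: rows of
        # W_v with small norm are shrunk more strongly, which suppresses
        # redundant or noisy features within the view.
        for v, X in enumerate(X_views):
            row_norms = np.linalg.norm(W[v], axis=1) + 1e-8
            D = np.diag(1.0 / (2.0 * row_norms))
            A = alpha[v] * (X.T @ X) + lam * D
            W[v] = np.linalg.solve(A, alpha[v] * (X.T @ Y))

        # Step 2: fix all W_v, update the view combination coefficients
        # from each view's fit error (lower error -> larger weight).
        errs = np.array([np.linalg.norm(X @ W[v] - Y) ** 2
                         for v, X in enumerate(X_views)])
        alpha = np.exp(-(errs - errs.min()) / gamma)
        alpha /= alpha.sum()

    return W, alpha

With, say, three hypothetical views one would call W, alpha = mvmtfe_sketch([X_color, X_texture, X_shape], Y). The alternating structure mirrors the description above: each W_v update reduces to a linear system that can be solved efficiently, and views whose projections fit the targets poorly receive smaller combination weights, which is one way the complementarity of views can be exploited.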