Abstract

Multi-view data has two basic characteristics: the consensus property and the complementary property, where complementary information refers to the collection of all view-specific information. Inspired by the popular saying "The whole is greater than the sum of its parts", we introduce the concepts of "parts", "sum of its parts" and "whole" into multi-view feature learning. When view-specific information is regarded as the "parts", the complementary information consisting of all view-specific information corresponds to the "sum of its parts". To explore the "whole" information, we propose the Learning Enhanced Specific Representations for Multi-view Feature Learning (MvESR) approach, which aims to learn enhanced view-specific information through beneficial interactions between views. Specifically, MvESR concatenates all view-specific representations as the "sum" information. Based on the "sum" information, MvESR obtains the enhanced view-specific information through an element-wise addition between each view-specific representation and the "sum" representation. The complementary information consisting of the enhanced view-specific representations can then be regarded as the "whole". In addition, MvESR obtains cross-view consensus information between each pair of views and concatenates them as fused cross-view consensus information. Considering that different representations may contribute differently to classification, we design an adaptive-weighting loss fusion strategy for multi-view classification. Experimental results on six large-scale public datasets verify that the proposed approach outperforms the compared methods.
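The "parts / sum / whole" construction described above can be illustrated with a minimal sketch. This is only a toy illustration, not the paper's implementation: the view dimensions, the random projection used to match the concatenated "sum" representation to each view's dimension for the element-wise addition, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 views, each with a 4-dim view-specific representation.
n_views, d = 3, 4
specific = [rng.standard_normal(d) for _ in range(n_views)]  # the "parts"

# "Sum of its parts": concatenation of all view-specific representations.
sum_repr = np.concatenate(specific)  # shape (n_views * d,)

# Project the "sum" back to each view's dimension so the element-wise
# addition is well-defined (the projection is an assumption; the paper
# does not specify here how the dimensions are matched).
W = [rng.standard_normal((d, n_views * d)) for _ in range(n_views)]

# Enhanced view-specific representations via element-wise addition.
enhanced = [s + Wv @ sum_repr for s, Wv in zip(specific, W)]

# The "whole": complementary information built from the enhanced parts.
whole = np.concatenate(enhanced)
print(whole.shape)
```

The point of the sketch is the ordering of operations: the "sum" is formed first by concatenation, each "part" is then enhanced against it, and only the enhanced parts are fused into the "whole".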
