Abstract
Learning from multi-view data is a central topic in advanced deep-model applications. Existing efforts mainly focus on exploiting shared information to maximize consensus among all views. However, once superfluous task-irrelevant noise is discarded, view-specific information is equally essential to downstream tasks. In this paper, we propose to decouple multi-view representation learning into shared and specific information extraction with parallel branches, and to seamlessly fuse the resulting features in end-to-end models. The common feature is obtained via view-agnostic contrastive learning and view-discriminative training, which minimize the discrepancy across views. Simultaneously, the specific feature is learned under orthogonality constraints that minimize view-level correlation. In addition, supervised training preserves the semantic information in the features. After disentangling the representations, we fuse the mutually complementary common and specific features for downstream tasks. In particular, we provide a theoretical explanation of our method from an information-bottleneck perspective. Compared with state-of-the-art multi-view models on benchmark datasets, we empirically demonstrate the advantage of our method on several downstream tasks, such as standard classification and few-shot learning. Extensive experiments further validate the robustness and transferability of our approach when the representation learned on a source dataset is applied to several target datasets.
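To make the two-branch design concrete, the following is a minimal sketch in PyTorch; it is not the authors' implementation. The module names, the InfoNCE form of the view-agnostic contrastive loss, the cosine-based orthogonality penalty, and the concatenation fusion are all illustrative assumptions chosen to match the description above.

```python
# A minimal sketch (not the authors' code) of decoupling a representation
# into shared and specific branches, with illustrative loss terms.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchEncoder(nn.Module):
    """Parallel branches: one for view-shared, one for view-specific features."""
    def __init__(self, in_dim: int, feat_dim: int):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                    nn.Linear(feat_dim, feat_dim))
        self.specific = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                      nn.Linear(feat_dim, feat_dim))

    def forward(self, x):
        return self.shared(x), self.specific(x)

def info_nce(z1, z2, tau: float = 0.1):
    # Cross-view contrastive loss on the shared branch: features of the same
    # sample under two views are positives; other batch samples are negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                       # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def orthogonality_loss(z_shared, z_specific):
    # Penalize correlation between shared and specific features so the
    # specific branch carries information the shared branch does not.
    zs = F.normalize(z_shared, dim=1)
    zp = F.normalize(z_specific, dim=1)
    return (zs * zp).sum(dim=1).pow(2).mean()

def fuse(z_shared, z_specific):
    # Simple concatenation fusion feeding the downstream task head.
    return torch.cat([z_shared, z_specific], dim=1)

# Example: combine the losses for two views x1, x2 of the same batch.
enc = TwoBranchEncoder(in_dim=128, feat_dim=64)
x1, x2 = torch.randn(32, 128), torch.randn(32, 128)
s1, p1 = enc(x1)
s2, p2 = enc(x2)
loss = info_nce(s1, s2) + orthogonality_loss(s1, p1) + orthogonality_loss(s2, p2)
loss.backward()
```

A supervised cross-entropy term on the fused feature, as described in the abstract, would be added to this objective to preserve semantic information; it is omitted here for brevity.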