Abstract

In this work, we address the challenging task of Unsupervised Multi-view Representation Learning (UMRL), which requires learning a unified feature representation from multiple views without supervision. Existing UMRL methods focus mainly on the learning process within the feature space while ignoring the valuable semantic information hidden in the different views. To address this issue, we propose a novel approach called Semantically Consistent Multi-view Representation Learning (SCMRL), which excavates the underlying multi-view semantic consensus information and uses it to guide the learning of the unified feature representation. Specifically, SCMRL consists of a within-view reconstruction module and a unified feature representation learning module. These modules are integrated by a contrastive learning strategy that simultaneously aligns the semantic labels of the view-specific feature representations and of the learned unified feature representation. This integration allows SCMRL to effectively leverage consensus information in the semantic space, thereby constraining the learning of the unified feature representation. Extensive experiments demonstrate its superiority over several state-of-the-art algorithms. Our code is released at https://github.com/YiyangZhou/SCMRL.
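As an illustration of the kind of semantic-level contrastive alignment the abstract describes, the sketch below contrasts cluster-assignment (semantic label) distributions derived from a view-specific representation against those from the unified representation, treating matching clusters across the two spaces as positive pairs. This is a minimal, hedged sketch with hypothetical function names and a NumPy implementation of a generic cluster-level InfoNCE loss; it is not the authors' actual implementation (see their repository for that).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_alignment_loss(view_logits, unified_logits, tau=0.5):
    """Cluster-level contrastive alignment of semantic labels (illustrative).

    view_logits:    (n, k) semantic-label logits from one view-specific
                    feature representation (n samples, k clusters).
    unified_logits: (n, k) semantic-label logits from the unified
                    feature representation.

    Each column of the (n, k) label matrix is one cluster's assignment
    vector over the batch; a cluster's counterpart in the other space is
    its positive, all other clusters are negatives (InfoNCE).
    """
    p = softmax(view_logits, axis=1).T      # (k, n) cluster assignment vectors
    q = softmax(unified_logits, axis=1).T
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    sim = (p @ q.T) / tau                   # (k, k) cross-space similarities
    # InfoNCE over clusters: the diagonal entries are the positive pairs.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Usage on random stand-in logits (real inputs would come from the
# within-view reconstruction and unified-representation modules):
rng = np.random.default_rng(0)
view_logits = rng.normal(size=(128, 10))
unified_logits = rng.normal(size=(128, 10))
loss = semantic_alignment_loss(view_logits, unified_logits)
```

In a multi-view setting, one such term per view would be summed and minimized jointly with the within-view reconstruction losses, so that consensus in the semantic space constrains the unified representation.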
