In multi-view representation learning (MVRL), category uncertainty poses a significant challenge. Existing methods excel at deriving shared representations across multiple views but often neglect the uncertainty associated with each view's cluster assignments, which increases ambiguity in category determination. Moreover, kernel-based and neural network-based approaches, while capable of revealing nonlinear relationships, pay little attention to category uncertainty. To address these limitations, this paper proposes a method that leverages the uncertainty of label distributions to enhance MVRL. Specifically, our approach combines label-distribution-based uncertainty reduction with view representation learning to improve clustering accuracy and robustness. It first computes the within-view representation of each sample together with its semantic labels, and then introduces a novel constraint based on either variance or information entropy to mitigate class uncertainty, thereby improving the discriminative power of the learned representations. Extensive experiments on diverse multi-view datasets demonstrate that our method consistently outperforms existing approaches, producing more accurate and reliable class assignments. The results highlight the effectiveness of our method in enhancing MVRL by reducing category uncertainty and improving overall classification performance. The method is also highly interpretable and strengthens the model's ability to learn view-consistent information.
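The abstract does not give the exact formulation of the variance- or entropy-based constraint, so the following is only a minimal illustrative sketch of how such uncertainty measures are commonly computed over per-view soft label distributions; all names and the NumPy implementation are assumptions, not the authors' code.

```python
# Hypothetical sketch: two common ways to quantify category uncertainty of
# soft cluster assignments (rows of p sum to 1), which a training objective
# could penalize to encourage more confident, less ambiguous assignments.
import numpy as np

def entropy_uncertainty(p, eps=1e-12):
    """Mean Shannon entropy of soft label distributions p (n_samples, n_classes).

    Lower entropy corresponds to more confident cluster assignments.
    """
    p = np.clip(p, eps, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

def variance_uncertainty(p):
    """Negative mean per-sample variance across classes.

    A near one-hot (confident) row has high variance across its entries,
    so minimizing the negative variance also discourages ambiguity.
    """
    return float(-np.mean(np.var(p, axis=1)))

# Example: soft assignments from one view, one confident and one ambiguous sample.
assignments = np.array([[0.95, 0.03, 0.02],
                        [0.34, 0.33, 0.33]])
print(entropy_uncertainty(assignments))   # dominated by the ambiguous second row
print(variance_uncertainty(assignments))  # confident first row contributes strongly
```

Either quantity could serve as a regularization term added to a view representation learning loss; which one the paper actually uses, and how it is weighted, is not specified in the abstract.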