Abstract

Multi-view representation learning aims to integrate information from multiple views of the same data to improve task performance. The information contained in multi-view data is usually complex: not only do different views carry different information, but different samples within the same view do as well. Most existing multi-view representation learning methods either treat every view/sample as equally important, or assign fixed or dynamic weights to different views/samples; neither approach captures the information carried by the individual dimensions of each sample, and both cause information redundancy, especially for high-dimensional samples. In this paper, we propose a novel unsupervised multi-view representation learning method based on instance-wise feature selection. Its main advantage is that it dynamically selects, for each sample, the dimensions that benefit both view-specific and view-shared representation learning, thereby improving performance from the perspective of the model input. The proposed method consists of a selector network, a view-specific network, and a view-shared network. Specifically, the selector network produces a selection template that selects a different number of informative dimensions from each sample, addressing the sample-heterogeneity problem; the view-specific and view-shared networks extract the view-specific and view-shared representations, respectively. The selector, view-shared, and view-specific networks are optimized alternately. Extensive experiments on various multi-view datasets with clustering and multi-label classification tasks demonstrate that the proposed method outperforms state-of-the-art multi-view learning methods.
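The core idea of instance-wise feature selection, where each sample keeps a different number of dimensions, can be sketched in plain Python. This is only an illustrative toy, not the paper's selector network: the scoring function, the threshold `tau`, and the name `select_dimensions` are assumptions made for the demo.

```python
def select_dimensions(sample, score_fn, tau=0.5):
    """Toy instance-wise feature selection (illustrative assumption,
    not the paper's exact method): keep every dimension whose
    relevance score exceeds a threshold, so different samples retain
    different numbers of dimensions."""
    scores = [score_fn(x) for x in sample]
    # Binary selection template: 1 keeps a dimension, 0 masks it out.
    template = [1 if s > tau else 0 for s in scores]
    # Masked input that would feed the view-specific / view-shared networks.
    selected = [x * m for x, m in zip(sample, template)]
    return template, selected

# Toy relevance score based on magnitude (an assumption for this demo;
# the paper learns its selector network instead).
score = lambda x: abs(x) / (1 + abs(x))

template, selected = select_dimensions([3.0, 0.2, -4.0, 0.0], score)
# Only the two high-magnitude dimensions survive for this sample.
```

In the actual method, a learned selector network would replace the hand-written `score` function and be optimized alternately with the representation networks.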
