Abstract

Semi-supervised multi-view learning (SML) has attracted considerable research attention in recent years, with webpage classification being a typical application domain. The performance of SML has been further boosted by the introduction of graph convolutional networks (GCNs) for learning discriminative node representations. However, there remains considerable room to improve GCN-based SML techniques, particularly in how to adaptively learn optimal graph structures for multi-view graph convolutional representation learning and how to make full use of the label and structure information in both labeled and unlabeled multi-view samples. In this paper, we propose a novel SML approach named semi-supervised multi-view graph convolutional networks (SMGCN) for webpage classification. It consists of a multi-view graph construction module and a semi-supervised multi-view graph convolutional representation learning module, integrated into a unified network architecture. The former learns an optimal graph structure for each view; the latter performs graph convolutional representation learning for each view and applies an inter-view attention scheme to fuse the multi-view representations. Network training is guided by losses defined on both the label and feature spaces, so that the label and structure information in labeled and unlabeled data is fully exploited. Experiments on two widely used webpage datasets demonstrate that SMGCN achieves state-of-the-art classification performance.
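
To make the architecture described above more concrete, the sketch below illustrates per-view graph convolution followed by inter-view attention fusion. This is a minimal illustration assuming a PyTorch implementation; the class names (ViewGCNLayer, InterViewAttention), dimensions, and stand-in normalized graphs are all hypothetical assumptions, not the authors' actual code, which the abstract does not specify.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewGCNLayer(nn.Module):
    """One graph-convolution layer for a single view: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        # a_hat: (n, n) normalized adjacency constructed/learned for this view
        # h:     (n, in_dim) node features for this view
        return F.relu(a_hat @ self.linear(h))

class InterViewAttention(nn.Module):
    """Fuses per-view node representations with softmax attention weights."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, view_reps):
        # view_reps: list of V tensors, each of shape (n, dim)
        stacked = torch.stack(view_reps, dim=1)           # (n, V, dim)
        weights = F.softmax(self.score(stacked), dim=1)   # (n, V, 1), sums to 1 over views
        return (weights * stacked).sum(dim=1)             # (n, dim) fused representation

# Toy usage: two views over the same 5 nodes, with different feature dimensions.
n, dims, hidden = 5, [8, 6], 4
gcns = [ViewGCNLayer(d, hidden) for d in dims]
fuse = InterViewAttention(hidden)
a_hats = [torch.softmax(torch.randn(n, n), dim=1) for _ in dims]  # stand-in normalized graphs
feats = [torch.randn(n, d) for d in dims]
fused = fuse([g(a, x) for g, a, x in zip(gcns, a_hats, feats)])   # shape (5, 4)
```

The attention weights give each node a per-view importance score, so views whose graph structure is more informative for a given node contribute more to its fused representation; this mirrors the inter-view attention fusion described in the abstract, while the graph construction and the label/feature-space losses would be defined elsewhere in the full method.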
