Abstract

Learning convolutional networks on graphs has been a popular topic in machine learning on graph-structured data and has achieved state-of-the-art results on various practical tasks. However, most existing works ignore the impact of the per-class distribution, so their performance may be limited by the diversity across categories. In this paper, we propose a novel class-aware progressive self-training (CPS) algorithm for training graph convolutional networks (GCNs). In contrast to other self-training algorithms for GCN learning, the proposed CPS algorithm leverages the class distribution to update the original graph structure in each self-training loop: (a) it finds high-confidence unlabeled nodes in the graph for each category and assigns them pseudo-labels, enlarging the current set of labeled nodes; and (b) it deletes noisy edges between different classes to sparsify the graph. The optimized graph is then used in subsequent self-training loops to further enhance classification performance. We evaluate the proposed CPS on several datasets commonly used for GCN learning, and the experimental results show that it outperforms other baselines.
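To make the two graph-update steps concrete, below is a minimal sketch of one CPS-style iteration, assuming softmax predictions from a trained GCN are available as a NumPy array. The per-class selection size, the `-1` convention for unknown labels, and the helper names `select_pseudo_labels` and `sparsify_graph` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_pseudo_labels(probs, labeled_mask, per_class_k=10):
    """Step (a): for each class, pick the k most confident unlabeled
    nodes predicted as that class and assign them pseudo-labels."""
    preds = probs.argmax(axis=1)   # predicted class per node
    conf = probs.max(axis=1)       # confidence of that prediction
    new_labels = {}
    for c in range(probs.shape[1]):
        # unlabeled nodes currently predicted as class c
        candidates = np.where((preds == c) & ~labeled_mask)[0]
        # keep the k most confident candidates for this class
        top = candidates[np.argsort(conf[candidates])[::-1][:per_class_k]]
        for node in top:
            new_labels[node] = c
    return new_labels

def sparsify_graph(adj, labels):
    """Step (b): delete edges whose two endpoints carry different
    known (pseudo-)labels, treating them as noisy inter-class edges.
    labels[i] == -1 marks a node whose class is still unknown."""
    adj = adj.copy()
    rows, cols = np.nonzero(adj)
    for i, j in zip(rows, cols):
        if labels[i] != -1 and labels[j] != -1 and labels[i] != labels[j]:
            adj[i, j] = 0
    return adj
```

In each self-training loop, the GCN would be retrained on the enlarged label set and the sparsified adjacency matrix, and the two steps repeated until the label set stops growing or a fixed number of iterations is reached.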
