Abstract

Graph convolutional networks (GCNs) have gained considerable attention and are widely used in graph data analytics. However, training large GCNs is challenging owing to the inherent complexity of graph-structured data. Previous training algorithms often suffer from slow convergence, caused by full-batch gradient descent over the entire graph, and from degraded model performance due to inappropriate node sampling. To address these issues, we propose a novel framework called Progressive Granular Ball Sampling Fusion (PGBSF). PGBSF leverages granular ball sampling to partition the original graph into a collection of subgraphs, improving both training efficiency and the capture of local detail. It then trains the GCN incrementally with a progressive scheme and a parameter-sharing strategy, which yields robust performance and rapid convergence. This simple yet effective strategy considerably improves classification accuracy and memory efficiency. Experimental results show that the proposed architecture consistently outperforms the baseline models in accuracy on almost all datasets across different label rates, and that PGBSF improves GCN performance significantly on large, complex datasets. Moreover, GCN+PGBSF reduces time complexity by training on subgraphs and achieves the fastest convergence among all compared models, with relatively small variance in the training loss.
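The abstract only outlines the method, so the sketch below is an illustrative reconstruction in plain PyTorch, not the authors' implementation. It shows how a single shared two-layer GCN might be trained progressively over a growing set of subgraphs. The `ball_partition` helper is a hypothetical stand-in (random node balls) for the paper's granular ball sampling, and all names (`progressive_train`, `normalize_adj`, etc.) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    """Two-layer GCN: logits = A_hat @ relu(A_hat @ X @ W1) @ W2."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, n_classes, bias=False)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

def normalize_adj(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} on a dense adjacency."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

def ball_partition(n_nodes, n_balls):
    """Hypothetical stand-in for granular ball sampling: a random node partition.
    The paper groups nodes into granular balls; this placeholder only mimics
    the interface (a list of disjoint node-index sets)."""
    return torch.randperm(n_nodes).chunk(n_balls)

def progressive_train(model, adj, x, y, train_mask,
                      n_balls=4, epochs_per_stage=50, lr=0.01):
    """Progressive training: one shared model, one more subgraph per stage."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    balls = ball_partition(x.size(0), n_balls)
    active = torch.tensor([], dtype=torch.long)
    for stage, ball in enumerate(balls, 1):
        # Grow the active subgraph by one ball; earlier stages' weights carry over.
        active = torch.cat([active, ball])
        sub_adj = normalize_adj(adj[active][:, active])
        sub_x, sub_y = x[active], y[active]
        sub_mask = train_mask[active]          # boolean mask of labeled nodes
        for _ in range(epochs_per_stage):
            opt.zero_grad()
            logits = model(sub_adj, sub_x)
            loss = F.cross_entropy(logits[sub_mask], sub_y[sub_mask])
            loss.backward()
            opt.step()
        print(f"stage {stage}: nodes={active.numel()}, loss={loss.item():.4f}")
```

The design point mirrored here is parameter sharing: the same `model` is optimized across all stages, so each newly added subgraph starts from the weights learned on the earlier ones rather than from scratch.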
