Abstract

Graph Neural Networks (GNNs) are powerful methods for analyzing non-Euclidean data. As a dominant type of GNN, Graph Convolutional Networks (GCNs) have wide applications. However, analysis of the generalization error of multilayer GCNs remains limited. Building on a review of single-layer GCNs, this paper analyzes the generalization error of two-layer GCNs and extends the conclusions to general GCN models. First, this paper examines two-layer GCNs and establishes the algorithmic stability of the GCN learning algorithm. Then, based on this algorithmic stability, a generalization bound for multilayer GCNs is obtained. This paper shows that the algorithmic stability of GCNs depends on the graph filter, its product with the node features, and the training procedure. Furthermore, the generalization error gap of GCNs tends to widen with more layers, which explains why deeper GCNs perform relatively poorly on test datasets.
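For orientation, below is a minimal sketch of a two-layer GCN forward pass and the filter-related quantities the abstract ties to algorithmic stability. The symmetric normalization, function names, and use of spectral norms are illustrative assumptions for exposition, not the paper's exact construction.

```python
import numpy as np

def normalized_graph_filter(adj):
    """A common GCN graph filter: D^{-1/2} (A + I) D^{-1/2} (illustrative choice)."""
    a_hat = adj + np.eye(adj.shape[0])
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def two_layer_gcn_forward(adj, X, W1, W2):
    """Two-layer GCN: Z = g @ ReLU(g @ X @ W1) @ W2 (softmax omitted)."""
    g = normalized_graph_filter(adj)
    H1 = np.maximum(g @ X @ W1, 0.0)   # first graph convolution + ReLU
    return g @ H1 @ W2                 # second graph convolution

def stability_relevant_norms(adj, X):
    """Spectral norms of the graph filter and of its product with the node
    features -- the kind of quantities the abstract says the stability bound
    depends on, alongside the training procedure."""
    g = normalized_graph_filter(adj)
    return np.linalg.norm(g, 2), np.linalg.norm(g @ X, 2)
```

In this sketch, deeper models compose more applications of the filter `g`, so filter-dependent constants compound across layers; this is consistent with the abstract's claim that the generalization gap grows with depth.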
