Abstract

Although many graph convolutional neural networks (GCNNs) have achieved superior performance in semisupervised node classification, they are designed from either the spatial or the spectral perspective and lack a general theoretical basis. Moreover, most existing GCNN methods ignore the ubiquitous noise in network topology and node content and are thus unable to model these uncertainties. These drawbacks reduce their effectiveness in integrating network topology and node content. To provide a probabilistic perspective on GCNNs, we model semisupervised node classification as a topology-constrained probabilistic latent space model, the probabilistic graph convolutional network (PGCN). By representing each node as a probability distribution rather than a point, the proposed framework seamlessly integrates node content and network topology. When the distribution in PGCN is specified as a Gaussian, the transductive node classification problem can be solved within the general framework, yielding a specific method called PGCN with Gaussian distribution representation (PGCN-G). To overcome overfitting in covariance estimation and to reduce computational complexity, PGCN-G is further improved to PGCN-G+ by constraining the covariance matrices of all vertices to share identical singular vectors. The optimization algorithm, based on expectation-maximization, shows that the proposed method iteratively denoises the network topology and node content with respect to each other. Beyond the effectiveness of this top-down framework, demonstrated via extensive experiments, the framework can be shown to subsume existing methods, namely the graph convolutional network, the graph attention network, and the Gaussian mixture model, and to elaborate their characteristics and relationships through specific derivations.
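As an illustration of the PGCN-G+ constraint described above, the following minimal sketch builds per-node Gaussian covariances that share one set of singular vectors and differ only in their per-node singular values. All names and shapes here are hypothetical; the abstract does not specify the parameterization, so this is an assumption-laden reading, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of the PGCN-G+ constraint: every node's covariance
# matrix shares the same singular vectors U and differs only in its
# per-node singular values. Dimensions and names are illustrative.
rng = np.random.default_rng(0)
d, n_nodes = 4, 3

# Shared orthonormal basis U, obtained here via QR of a random matrix.
U, _ = np.linalg.qr(rng.normal(size=(d, d)))

# Per-node positive singular values; node i gets Sigma_i = U S_i U^T,
# which is symmetric positive definite by construction.
node_scales = rng.uniform(0.5, 2.0, size=(n_nodes, d))
covariances = [U @ np.diag(s) @ U.T for s in node_scales]

# Each node would then be represented as N(mu_i, Sigma_i); the means
# could come from node content, e.g. a map of the features (not shown).
for i, Sigma in enumerate(covariances):
    eigvals = np.linalg.eigvalsh(Sigma)
    print(f"node {i}: min eigenvalue = {eigvals.min():.3f}")
```

Because all nodes share `U`, estimating the covariances requires only one basis plus `d` scalars per node instead of a full `d x d` matrix per node, which is one plausible way the shared-singular-vector constraint reduces both overfitting and computational cost.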

