Abstract

This paper focuses on the iterative learning process of Graph Convolutional Networks (GCNs), which involves two vital steps: a message propagation (message passing) step that aggregates neighboring node features via aggregators, and an encoding step that encodes node feature representations via updaters. We propose a novel affinity-aware encoding as an updater in GCNs, which aggregates a node's neighbors while updating that node's features. Using the affinity values of our encoding, we order the neighboring nodes to determine the correspondence between encoding functions and neighbors. Furthermore, to explicitly reduce model size, we propose a lightweight variant of our updater that integrates Depth-wise Separable Convolution (DSC), namely Depth-wise Separable Graph Convolution (DSGC). Comprehensive experiments on graph data demonstrate that our models significantly improve accuracy on graphs with low-dimensional node features. In this low-dimensional feature space, our models also achieve state-of-the-art results on two metrics: Macro-F1 and the Matthews correlation coefficient (MCC). In addition, our models are robust across different low-dimensional feature selection strategies.
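To make the two-step structure concrete, the following is a minimal NumPy sketch of one GCN-style layer in the spirit described above: a mean aggregator for the message-passing step, followed by a depth-wise separable update (a per-channel scaling followed by a point-wise mixing matrix). The function and parameter names (`dsgc_layer`, `W_depth`, `W_point`) are hypothetical illustrations, not the paper's actual implementation, and the affinity-based ordering of neighbors is omitted for brevity.

```python
import numpy as np

def dsgc_layer(X, A, W_depth, W_point):
    """One illustrative layer: mean aggregation + depth-wise separable update.

    X       : (N, C)  node feature matrix
    W_depth : (C,)    depth-wise weights, one per input channel
    W_point : (C, F)  point-wise (1x1) mixing weights
    A       : (N, N)  adjacency matrix
    """
    # Step 1: message propagation -- aggregate neighbor features (mean aggregator).
    deg = A.sum(axis=1, keepdims=True)
    H = (A @ X) / np.maximum(deg, 1.0)

    # Step 2: depth-wise separable update --
    # depth-wise: scale each feature channel independently,
    # point-wise: mix channels with a small dense matrix, then apply ReLU.
    H = H * W_depth
    return np.maximum(H @ W_point, 0.0)
```

Compared with a single dense weight matrix of shape (C, F), the separable update uses C + C*F parameters arranged as a per-channel step plus a channel-mixing step, which is the parameter-reduction idea behind DSC.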
