Abstract

We propose a framework for the definition of neural models for graphs that do not rely on backpropagation for training, thus making learning more biologically plausible and amenable to parallel implementation. Our proposed framework is inspired by Gated Linear Networks and allows the adoption of multiple graph convolutions. Specifically, each neuron is defined as a set of graph convolution filters (weight vectors) and a gating mechanism that, given a node and its topological context, generates the weight vector to use for processing the node’s attributes. Two different graph processing schemes are studied, i.e., a message-passing aggregation scheme where the gating mechanism is embedded directly into the graph convolution, and a multi-resolution one where neighboring nodes at different topological distances are jointly processed by a single graph convolution layer. We also compare the effectiveness of different alternatives for defining the context function of a node, i.e., based on hyperplanes or on prototypes, and using a soft or a hard gating mechanism. We propose a unified theoretical framework allowing us to characterize the expressiveness of the proposed models. We experimentally evaluate our backpropagation-free graph convolutional neural models on commonly adopted node classification datasets and show competitive performance compared to their backpropagation-based counterparts.
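To make the gated-neuron idea concrete, here is a minimal sketch of one such neuron with hard hyperplane (halfspace) gating and a local, Gated Linear Network-style update. This is illustrative only, not the authors' implementation: the class name GatedGraphNeuron, all parameter names, and the choice of a logistic local loss are assumptions; the context vector is assumed to summarize a node's topological neighborhood (e.g., an aggregation of neighbor attributes).

```python
import numpy as np

class GatedGraphNeuron:
    """Hypothetical sketch of one backpropagation-free gated neuron.

    Holds 2**n_hyperplanes weight vectors (filters); a fixed set of random
    hyperplanes maps a node's topological context to the index of the
    filter used to process that node's attributes.
    """

    def __init__(self, in_dim, ctx_dim, n_hyperplanes=4, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(2 ** n_hyperplanes, in_dim))  # one filter per context
        self.H = rng.normal(size=(n_hyperplanes, ctx_dim))                 # fixed random hyperplanes
        self.lr = lr

    def _context(self, ctx):
        # Hard gating: encode which side of each hyperplane the context lies on.
        bits = (self.H @ ctx > 0).astype(int)
        return int(bits @ (2 ** np.arange(len(bits))))

    def forward(self, x, ctx):
        c = self._context(ctx)
        return 1.0 / (1.0 + np.exp(-self.W[c] @ x)), c

    def local_update(self, x, ctx, target):
        # Local logistic-loss gradient step on the selected filter only;
        # no gradients are backpropagated through other neurons or layers.
        p, c = self.forward(x, ctx)
        self.W[c] -= self.lr * (p - target) * x
        return p

# Toy usage: node attributes x, context taken here as a stand-in for an
# aggregation (e.g., mean) of neighbor attributes.
rng = np.random.default_rng(1)
x, ctx = rng.normal(size=8), rng.normal(size=8)
neuron = GatedGraphNeuron(in_dim=8, ctx_dim=8)
p = neuron.local_update(x, ctx, target=1.0)
```

A prototype-based context function, mentioned as an alternative in the abstract, would replace the hyperplane test with a nearest-prototype assignment; a soft-gating variant would mix the filters with context-dependent weights instead of selecting a single one.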
