Abstract

In graph learning, it is fundamental to integrate features from the graph structure and the node attributes. Toward this end, graph convolution techniques have been devised on the premise that attribute similarity between two nodes is semantically consistent with their topological proximity. However, many real-world networks exhibit semantic inconsistency, i.e., the phenomenon that directly connected nodes are dissimilar in their attributes. This work is concerned with two related questions: how do we quantitatively measure the semantic consistency between node attributes and graph structure, and can we leverage this information to improve graph representation? To answer these questions, we first introduce a novel metric to evaluate the semantic consistency of a graph, and then identify a set of key designs that encode local semantic consistency information into an ego node feature. We then fuse this new feature with the original node attributes by concatenating the two parts, using the semantic consistency metric as a weighting factor. Experiments on real-world datasets show that a simple classifier (e.g., a multilayer perceptron) built on our unsupervised feature learning scheme achieves strong performance across the datasets, especially those with low semantic consistency, compared to popular supervised GCNs and other competitive unsupervised graph representation learning models.
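The pipeline sketched in the abstract can be illustrated with a minimal example. The following is a plausible sketch only, not the paper's actual definitions: it assumes cosine similarity as the attribute-similarity measure, takes the graph-level metric to be the mean similarity over edges, the per-node ego feature to be the mean similarity to direct neighbors, and fuses by concatenating the attributes with the metric-weighted ego feature.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity, returning 0 for zero vectors."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    if nu == 0 or nv == 0:
        return 0.0
    return float(u @ v / (nu * nv))

def semantic_consistency(X, edges):
    """Graph-level metric (assumed form): mean attribute similarity over edges."""
    sims = [cosine(X[i], X[j]) for i, j in edges]
    return float(np.mean(sims)) if sims else 0.0

def ego_consistency_features(X, edges):
    """Per-node ego feature (assumed form): mean similarity to direct neighbors."""
    n = X.shape[0]
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    feat = np.zeros((n, 1))
    for i in range(n):
        if nbrs[i]:
            feat[i, 0] = np.mean([cosine(X[i], X[j]) for j in nbrs[i]])
    return feat

def fuse(X, edges):
    """Concatenate attributes with the consistency-weighted ego feature."""
    c = semantic_consistency(X, edges)
    return np.hstack([X, c * ego_consistency_features(X, edges)])
```

On a toy graph with attributes `X = [[1,0],[1,0],[0,1]]` and edges `(0,1), (1,2)`, the metric is 0.5 (one similar pair, one dissimilar pair), and the fused matrix appends one weighted column to the original attributes.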
