Abstract

Node representation learning has attracted increasing attention due to its efficacy for various applications on graphs. However, fairness remains largely under-explored within the field, even though it has been shown that the use of graph structure in learning amplifies bias. To this end, this work theoretically explains the sources of bias in node representations obtained via graph neural networks (GNNs). The analysis reveals that both nodal features and graph structure lead to bias in the obtained representations. Building upon this analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias. Our theoretical analysis and proposed schemes can be readily employed to understand and mitigate bias in various GNN-based learning mechanisms. Extensive experiments on node classification and link prediction over multiple real networks show that the proposed augmentation strategies improve fairness while providing utility comparable to state-of-the-art methods.
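
To make the idea of fairness-aware structural augmentation concrete, below is a minimal NumPy sketch of one common flavor of such augmentation: dropping a fraction of intra-group edges so that the graph topology correlates less with the sensitive attribute. The balancing rule and the function name `fair_edge_augment` are illustrative assumptions, not the paper's specific algorithm.

```python
import numpy as np

def fair_edge_augment(edge_index, sens, drop_prob=0.3, seed=0):
    """Hypothetical fairness-aware edge augmentation.

    Edges whose endpoints share the same sensitive attribute value
    (intra-group edges) are dropped with probability `drop_prob`,
    weakening the correlation between structure and group membership.

    edge_index : (2, E) int array, one column per undirected edge.
    sens       : (N,) int array of sensitive attribute values per node.
    """
    rng = np.random.default_rng(seed)
    src, dst = edge_index
    intra = sens[src] == sens[dst]                 # edges inside one group
    drop = intra & (rng.random(src.shape[0]) < drop_prob)
    return edge_index[:, ~drop]                    # inter-group edges are kept

# Toy usage: a 6-node graph with a binary sensitive attribute.
edges = np.array([[0, 0, 1, 2, 3, 4],
                  [1, 2, 2, 3, 4, 5]])
sens = np.array([0, 0, 0, 1, 1, 1])
print(fair_edge_augment(edges, sens, drop_prob=0.5))
```

An analogous augmentation can be applied to nodal features, e.g., masking feature dimensions that are highly correlated with the sensitive attribute, so that both sources of bias identified in the analysis are addressed.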
