Abstract

Graph attention networks are a popular method for link prediction tasks, but the weight assigned to each sample does not reflect that sample's own performance during training. Moreover, since the number of links in a graph is much larger than the number of nodes, mapping functions are usually used to map the learned node features to link features, and how well such a function expresses node similarity determines the quality of link feature learning. To tackle these issues, a new model, graph attention networks based on a Radial Basis Function (RBF) with squeeze loss, is proposed, comprising two improvements. First, an RBF with extended parameters is used to transform the node features output by the attention layer into link features; the resulting link feature embedding is improved by shortening the distance between linked nodes and enlarging the distance between unlinked nodes in the vector space. Second, a squeeze loss is designed to adjust each sample's loss according to its performance during training, changing the proportion that each sample contributes to the loss function so that training resources are allocated reasonably. Link prediction experiments on several datasets show that the proposed method outperforms the baselines.
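As a minimal sketch of the RBF-based mapping described above, the snippet below scores a candidate link from two node embeddings with a standard RBF kernel: embeddings that are close in vector space yield a score near 1, distant ones a score near 0. The function name, the `gamma` parameter, and the toy embeddings are illustrative assumptions, not the paper's exact formulation (which uses extended parameters).

```python
import numpy as np

def rbf_link_feature(h_u, h_v, gamma=1.0):
    """Illustrative RBF kernel on two node embeddings.

    Returns a similarity in (0, 1]: nearby embeddings (likely linked
    nodes) score close to 1, distant embeddings close to 0.
    """
    return float(np.exp(-gamma * np.sum((h_u - h_v) ** 2)))

# Toy node embeddings (hypothetical attention-layer outputs).
h_a = np.array([0.1, 0.2])
h_b = np.array([0.1, 0.25])   # close to h_a: a plausible link
h_c = np.array([2.0, -1.0])   # far from h_a: an unlikely link

s_link = rbf_link_feature(h_a, h_b)
s_no_link = rbf_link_feature(h_a, h_c)
```

Training with a contrastive-style objective on such scores would pull linked nodes together and push unlinked nodes apart, which is the geometric effect the abstract describes.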
