Many graph neural networks (GNNs) are inapplicable when the graph structure representing the node relations is unavailable. Recent studies have shown that this problem can be effectively solved by jointly learning the graph structure and the parameters of the GNN. However, most of these methods learn graphs using either a Euclidean or a hyperbolic metric, implicitly assuming that the curvature of the embedding space is constant: either zero or negative. Graph embedding spaces usually have nonconstant curvature, so this assumption may produce obfuscatory nodes, i.e., nodes that are embedded improperly and lie close to multiple categories. In this article, we propose a joint-space graph learning (JSGL) method for GNNs. JSGL learns a graph based on Euclidean embeddings and identifies Euclidean obfuscatory nodes. The graph topology near the identified obfuscatory nodes is then refined in hyperbolic space. We also present a theoretical justification of our method for identifying obfuscatory nodes and conduct a series of experiments to test the performance of JSGL. The results show that JSGL outperforms many baseline methods. To obtain more insights, we analyze potential reasons for this superior performance.
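The following is a minimal, hypothetical sketch (not the authors' code) of the two ideas the abstract describes: flagging "obfuscatory" nodes whose Euclidean embeddings lie close to more than one category, and refining the graph topology around those nodes with a hyperbolic (Poincaré-ball) distance. The centroid-based flagging criterion, the `margin` and `k` parameters, and the projection into the unit ball are all assumptions for illustration only.

```python
# Hypothetical illustration of the abstract's pipeline; all names, thresholds,
# and the centroid-based criterion are assumptions, not the JSGL algorithm.
import numpy as np


def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincaré ball."""
    diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2)) + eps
    return np.arccosh(1.0 + 2.0 * diff / denom)


def find_obfuscatory_nodes(X, labels, margin=0.2):
    """Flag nodes whose two nearest class centroids are almost equally close
    in Euclidean space (an assumed proxy for 'close to multiple categories')."""
    classes = np.unique(labels)
    centroids = np.stack([X[labels == c].mean(axis=0) for c in classes])
    flags = []
    for x in X:
        d = np.sort(np.linalg.norm(centroids - x, axis=1))
        # Small gap between the nearest and second-nearest centroid
        # relative to the nearest distance -> ambiguous embedding.
        flags.append((d[1] - d[0]) < margin * d[0])
    return np.array(flags)


def refine_edges_hyperbolically(X, flags, k=3, scale=0.9):
    """Rebuild k-NN edges for flagged nodes using Poincaré distances,
    after rescaling the embeddings into the open unit ball."""
    H = scale * X / (1.0 + np.linalg.norm(X, axis=1, keepdims=True))
    n = X.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        if not flags[i]:
            continue
        d = np.array([poincare_distance(H[i], H[j]) if j != i else np.inf
                      for j in range(n)])
        for j in np.argsort(d)[:k]:
            A[i, j] = A[j, i] = 1.0
    return A


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(2, 1, (20, 8))])
    y = np.array([0] * 20 + [1] * 20)
    flags = find_obfuscatory_nodes(X, y)
    A = refine_edges_hyperbolically(X, flags, k=3)
    print(f"flagged {flags.sum()} obfuscatory nodes; "
          f"added {int(A.sum() // 2)} refined edges")
```

In this sketch the hyperbolic refinement is applied only to the flagged nodes, mirroring the abstract's claim that the Euclidean graph is kept where it is adequate and reworked only near obfuscatory nodes; the full JSGL method additionally learns the graph jointly with the GNN parameters, which is not shown here.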