Abstract

Learning low-dimensional embeddings of graph data in curved Riemannian manifolds has gained traction because such manifolds act as effective geometric inductive biases. In particular, models of hyperbolic geometry such as the Poincar\'{e} ball and the Lorentz/hyperboloid model have found applications in learning data with hierarchical structure. Gromov's hyperbolicity measures how well a graph can be embedded in hyperbolic space with low distortion. This paper shows that adversarial attacks that perturb the network structure also alter the hyperbolicity of graphs, rendering hyperbolic space less effective for learning low-dimensional node embeddings. To circumvent this problem, we propose learning embeddings in pseudo-Riemannian manifolds, such as Lorentzian manifolds, and show empirically that these embeddings are robust to adversarial perturbations. Despite the recent proliferation of adversarial robustness methods for graph data, this is the first work to explore the relationship between adversarial attacks and hyperbolicity while also providing a means of mitigating such vulnerabilities.
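
Since the argument hinges on Gromov's hyperbolicity as the measure of how tree-like, and hence hyperbolically embeddable, a graph is, the sketch below illustrates one standard way to compute it via the four-point condition. This is not the paper's implementation: the function name `gromov_delta`, the use of networkx and numpy, and the brute-force O(n^4) loop over quadruples are illustrative assumptions suitable only for small graphs.

```python
# A minimal sketch (illustrative, not from the paper) of computing Gromov's
# delta-hyperbolicity of a connected graph via the four-point condition.
from itertools import combinations

import networkx as nx
import numpy as np


def gromov_delta(G: nx.Graph) -> float:
    """Return the worst-case four-point delta-hyperbolicity of a connected graph."""
    nodes = list(G.nodes)
    # All-pairs shortest-path distance matrix.
    D = nx.floyd_warshall_numpy(G, nodelist=nodes)
    delta = 0.0
    for i, j, k, l in combinations(range(len(nodes)), 4):
        # The three pairwise distance sums over the quadruple, sorted ascending.
        sums = sorted([D[i, j] + D[k, l], D[i, k] + D[j, l], D[i, l] + D[j, k]])
        # Half the gap between the two largest sums; delta is the maximum over quadruples.
        delta = max(delta, (sums[2] - sums[1]) / 2.0)
    return delta


if __name__ == "__main__":
    tree = nx.balanced_tree(2, 4)   # trees are 0-hyperbolic (perfectly tree-like)
    cycle = nx.cycle_graph(12)      # long cycles are far from tree-like
    print(gromov_delta(tree), gromov_delta(cycle))
```

A low delta (close to 0) indicates a tree-like metric that hyperbolic space can represent with little distortion; structural perturbations that raise delta are, in the abstract's framing, what make hyperbolic embeddings less effective.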
