Abstract

Many relational datasets, including relational databases, feature links of different types (e.g., actors act in movies, users rate movies); such data are known as multi-relational, heterogeneous, or multi-layer graphs. Edge types/graph layers are often incompletely labeled. For example, IMDb lists Tom Cruise as a cast member of Mission Impossible, but not as its star. Inferring latent layers is useful for relational prediction tasks (e.g., predicting Tom Cruise's salary or his presence in other movies). This paper describes a Latent Layer Generative Framework (LLGF) that extends graph encoder-decoder architectures to include latent layers. The decoder treats the observed edge-type signal as a linear combination of latent layers. The encoder infers parallel node representations, one for each latent layer. We evaluate the proposed framework on six benchmark graph learning datasets. Qualitative evidence indicates that LLGF recovers ground-truth layers well. For link prediction as a downstream task, we find that extending Variational Graph Auto-Encoders with LLGF increases link prediction accuracy compared to state-of-the-art graph Variational Auto-Encoders (by up to 6% AUC, depending on the dataset).
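
To make the decoder idea concrete, the following is a minimal sketch (not the authors' code) of an LLGF-style decoder in PyTorch: each latent layer keeps its own node embeddings, the per-layer inner-product decoders are combined with learned mixing weights to reconstruct the observed edge-type signal. The class name LatentLayerDecoder, the layer count K, and the embedding size are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn


class LatentLayerDecoder(nn.Module):
    def __init__(self, num_layers: int):
        super().__init__()
        # Mixing weights: one scalar per latent layer (the "linear combination").
        self.mix = nn.Parameter(torch.ones(num_layers) / num_layers)

    def forward(self, z_layers: torch.Tensor) -> torch.Tensor:
        # z_layers: (K, N, D) parallel node embeddings, one per latent layer.
        # Per-layer inner-product decoder -> (K, N, N) edge scores.
        per_layer = torch.einsum('knd,kmd->knm', z_layers, z_layers)
        # Observed edge-type signal reconstructed as a linear combination of layers.
        logits = torch.einsum('k,knm->nm', self.mix, per_layer)
        return torch.sigmoid(logits)


# Hypothetical usage: 3 latent layers, 100 nodes, 16-dimensional embeddings.
z = torch.randn(3, 100, 16)
adj_hat = LatentLayerDecoder(num_layers=3)(z)  # (100, 100) edge probabilities

In a full variational setup, the parallel embeddings z_layers would be sampled from per-layer posteriors produced by the encoder; the sketch above only illustrates the decoding step.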
