Abstract

Link prediction is a fundamental problem in network modeling. One family of link prediction approaches treats network data as an exchangeable array whose entries are explained by random functions (e.g., block models and Gaussian processes) over latent node factors. Despite their strong ability to model missing links, these models tend to have high computational complexity and therefore struggle with large networks. To address this problem, we develop a novel variational random function model by defining latent Gaussian processes on exchangeable arrays. This model not only inherits the ability of Gaussian processes to describe nonlinear interactions between nodes, but also enjoys a significant reduction in computational complexity. To further scale the model to large network data, we develop an efficient key-value-free strategy under the map-reduce framework that greatly reduces inference time. Experimental results on large network data demonstrate both the efficacy and efficiency of the proposed method over state-of-the-art methods in network modeling.
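To make the core idea concrete, the following is a minimal sketch, not the authors' implementation: it scores candidate links by placing a Gaussian process over a function of latent node factors. The latent factors `U`, the pairwise feature construction, the RBF kernel, and the plain GP-regression scoring are all illustrative assumptions; the paper's actual model uses a variational treatment of the exchangeable array rather than exact GP regression.

```python
# Hypothetical sketch of GP-based link prediction over latent node factors.
# Everything here (latent factors, kernel, labels) is assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, latent_dim = 30, 4
U = rng.normal(size=(n_nodes, latent_dim))  # assumed latent node factors

def pair_features(i, j):
    """Symmetric feature vector for a node pair, built from latent factors."""
    return np.concatenate([U[i] * U[j], np.abs(U[i] - U[j])])

def rbf_kernel(X, Y, lengthscale=1.0):
    """Standard RBF kernel between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Observed (training) node pairs with placeholder link labels.
train_pairs = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
               if rng.random() < 0.3]
X_train = np.stack([pair_features(i, j) for i, j in train_pairs])
y_train = (rng.random(len(train_pairs)) < 0.5).astype(float)  # placeholder labels

# GP-regression posterior mean used as a link score; the actual model would use
# a classification likelihood with variational inference instead.
K = rbf_kernel(X_train, X_train) + 1e-2 * np.eye(len(train_pairs))
alpha = np.linalg.solve(K, y_train - y_train.mean())

def link_score(i, j):
    """Predicted link score for the node pair (i, j)."""
    k_star = rbf_kernel(pair_features(i, j)[None, :], X_train)
    return float(y_train.mean() + k_star @ alpha)

print(link_score(0, 1))
```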
