Abstract

A large-scale dynamic network is a fundamental data source in many big-data applications, and it can be naturally described by a high-dimensional and incomplete (HDI) tensor. Such an HDI tensor contains rich knowledge regarding various desired patterns, e.g., potential links in a dynamic network. A latent factorization-of-tensors (LFT) model built with a stochastic gradient descent (SGD) solver can extract such knowledge from an HDI tensor. Nevertheless, an SGD-based LFT model suffers from slow convergence, which impairs its efficiency on a large-scale dynamic network. To address this issue, this chapter presents a proportional-integral-derivative (PID)-incorporated LFT (PLFT) model. It builds an adjusted instance error following the PID control principle and then substitutes it into the SGD solver to accelerate convergence. Empirical studies on two large-scale dynamic networks generated from a real application show that the proposed PLFT model outperforms several state-of-the-art models in convergence rate and computational efficiency when predicting missing directed and weighted links in a given dynamic network.
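The idea of replacing the raw instance error in an SGD solver with a PID-adjusted error can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the chapter's exact algorithm: it factorizes a third-order tensor given as a list of observed `(i, j, t, y)` entries, and the gain values `KP`, `KI`, `KD`, the learning rate, the regularization constant, and the initialization scale are all illustrative choices.

```python
import numpy as np

# Illustrative sketch (not the chapter's exact algorithm): an SGD solver for
# latent factorization of a third-order HDI tensor, where the raw instance
# error is replaced by a PID-adjusted error. All hyperparameter values here
# (KP, KI, KD, lr, reg, init scale) are assumptions chosen for illustration.

def pid_lft(entries, shape, rank=4, lr=0.01, reg=0.05,
            KP=1.0, KI=0.1, KD=0.1, epochs=300, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.5, size=(shape[0], rank))  # source-node factors
    B = rng.normal(scale=0.5, size=(shape[1], rank))  # target-node factors
    C = rng.normal(scale=0.5, size=(shape[2], rank))  # time-slot factors
    e_int = np.zeros(len(entries))   # per-instance integral of past errors
    e_prev = np.zeros(len(entries))  # per-instance previous error
    for _ in range(epochs):
        for k, (i, j, t, y) in enumerate(entries):
            pred = np.sum(A[i] * B[j] * C[t])
            e = y - pred                  # raw instance error (P term)
            e_int[k] += e                 # accumulated error (I term)
            # PID-adjusted instance error: proportional + integral + derivative
            e_adj = KP * e + KI * e_int[k] + KD * (e - e_prev[k])
            e_prev[k] = e
            # SGD step on the adjusted error with L2 regularization;
            # gradients are computed before any factor is updated
            gA = e_adj * B[j] * C[t] - reg * A[i]
            gB = e_adj * A[i] * C[t] - reg * B[j]
            gC = e_adj * A[i] * B[j] - reg * C[t]
            A[i] += lr * gA
            B[j] += lr * gB
            C[t] += lr * gC
    return A, B, C
```

Compared with plain SGD (recovered by setting `KI = KD = 0`), the integral term accumulates past errors to push persistent residuals down, and the derivative term damps abrupt error changes, which is how the PID principle is meant to speed up convergence.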
