Abstract

Recently, Bayesian modeling and variational inference (VI) were leveraged to enable nonnegative factor matrix learning with automatic rank determination in tensor canonical polyadic decomposition (CPD), which has found various applications in big data analytics. However, since VI inherently performs block coordinate descent (BCD) steps over the functional space, it generally does not allow integration with modern large-scale optimization methods, making scalability a critical issue. In this paper, it is revealed that the expectations of the variables updated by the VI algorithm are equivalent to the block minimization steps of a deterministic optimization problem. This equivalence further enables the adoption of an inexact BCD method for devising a fast nonnegative factor matrix learning algorithm with automatic tensor rank determination. Numerical results on synthetic data and real-world applications show that the performance of the proposed algorithm is comparable with that of the VI-based algorithm, but with significantly reduced computation times.
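To make the inexact-BCD idea concrete, the following is a minimal sketch of nonnegative CPD for a third-order tensor where each block (factor matrix) is updated with a single projected-gradient step rather than solved exactly. This is an illustrative generic scheme, not the paper's algorithm: the tensor rank is assumed known, and the Bayesian machinery for automatic rank determination is omitted.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (I x R) and B (J x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (C-order flattening of the rest)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def ncpd_inexact_bcd(T, rank, n_iters=800, seed=0):
    """Nonnegative CPD via inexact BCD.

    Each factor is updated with one projected-gradient step on the
    block least-squares subproblem, instead of an exact block minimization.
    NOTE: the rank is fixed here; automatic rank determination (as in the
    paper's Bayesian formulation) is not implemented in this sketch.
    """
    rng = np.random.default_rng(seed)
    factors = [rng.random((d, rank)) for d in T.shape]
    for _ in range(n_iters):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            KR = khatri_rao(others[0], others[1])  # matches C-order unfolding
            G = KR.T @ KR                          # R x R Gram matrix
            L = np.linalg.norm(G, 2)               # Lipschitz constant of block gradient
            grad = factors[mode] @ G - unfold(T, mode) @ KR
            # Inexact step: gradient descent + projection onto the nonnegative orthant
            factors[mode] = np.maximum(factors[mode] - grad / L, 0.0)
    return factors

def cpd_reconstruct(factors):
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

Because the step size 1/L is the inverse Lipschitz constant of each block's quadratic subproblem, every inexact step is a descent step, which is what lets the scheme trade exact block solves for cheap iterations.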

