Abstract

Given a dense tensor, how can we find latent patterns and relations efficiently? Existing Tucker decomposition methods based on Alternating Least Squares (ALS) have limitations in time and space because they operate directly on the large dense tensor to compute the decomposition. Although a few methods have tried to reduce the computational cost by sampling tensors, sketching tensors, or using efficient matrix operations, their speed and memory efficiency remain limited. In this paper, we propose D-Tucker, a fast and memory-efficient method for Tucker decomposition of large dense tensors. D-Tucker consists of three phases: approximation, initialization, and iteration. D-Tucker 1) compresses the input tensor by computing the randomized singular value decomposition (SVD) of matrices sliced from the input tensor, and 2) efficiently obtains the orthogonal factor matrices and the core tensor by using the SVD results of the sliced matrices. Through experiments, we show that D-Tucker is up to 38.4× faster and requires up to 17.2× less space than existing methods, with little sacrifice in accuracy.
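To make the approximation phase more concrete, below is a minimal Python sketch of compressing a dense third-order tensor by computing a randomized SVD of each slice. It assumes slicing along the last mode and uses illustrative helper names (`randomized_svd`, `approximate_slices`); the exact slicing and reordering strategy of D-Tucker follows the paper, not this sketch.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10):
    """Rank-truncated randomized SVD of a matrix A (sketch-based)."""
    m, n = A.shape
    Omega = np.random.randn(n, rank + oversample)   # random Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                  # orthonormal basis of the range of A
    B = Q.T @ A                                     # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]

def approximate_slices(X, rank):
    """Illustrative approximation phase: compress each frontal slice of a
    third-order dense tensor X by its rank-`rank` randomized SVD."""
    slices = []
    for k in range(X.shape[2]):
        U, s, Vt = randomized_svd(X[:, :, k], rank)
        slices.append((U, s, Vt))                   # keep only the small factors
    return slices

# Example: compress a 200 x 150 x 30 dense tensor slice by slice
X = np.random.rand(200, 150, 30)
compressed = approximate_slices(X, rank=10)
```

After this phase, later steps of the method would work only with the small factor matrices of each slice rather than the full dense tensor, which is where the time and memory savings come from.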
