Abstract

The low-rank approximation of big data matrices and tensors plays a pivotal role in many modern applications. Although a truncated singular value decomposition (SVD) furnishes the best low-rank approximation, its computation is challenging on modern multicore architectures. Recently, randomized subspace iteration has been shown to be a powerful tool for approximating large-scale matrices. In this paper, we present a two-sided variant of randomized subspace iteration. Novel in our work is the use of an unpivoted QR factorization, rather than the SVD, to factor the compressed matrix; our algorithm therefore computes a randomized rank-revealing URV decomposition. We prove that the algorithm is rank-revealing by establishing bounds on the singular values as well as on the remaining blocks of the compressed matrix. We further bound the error of the resulting low-rank approximations in both the 2-norm and the Frobenius norm. In addition, we employ the proposed algorithm to efficiently compute low-rank tensor decompositions: we present two randomized algorithms, one for the truncated higher-order SVD and the other for the tensor SVD. Numerical tests on (i) various classes of matrices and (ii) synthetic tensors and real datasets demonstrate the efficacy of the proposed algorithms.
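The abstract does not spell out the algorithm, but the ingredients it names (two-sided randomized subspace iteration to compress the matrix from both sides, followed by an unpivoted QR of the small compressed matrix) suggest the following minimal NumPy sketch. The function name `randomized_urv`, the oversampling parameter `p`, and the power-iteration count `q` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def randomized_urv(A, k, p=10, q=2, rng=None):
    """Hedged sketch of a two-sided randomized URV decomposition.

    Compresses A from both sides via randomized subspace iteration,
    then factors the small compressed matrix with an unpivoted QR
    instead of an SVD. Returns U (m x l), R (l x l upper triangular),
    V (n x l) with A ~= U @ R @ V.T, where l = min(k + p, m, n).
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    l = min(k + p, m, n)
    # Right sketch: capture the dominant column space of A.
    Omega = rng.standard_normal((n, l))
    Q1, _ = np.linalg.qr(A @ Omega)            # m x l
    # Power (subspace) iterations sharpen both captured subspaces;
    # Q2 approximates the dominant row space of A.
    for _ in range(q):
        Q2, _ = np.linalg.qr(A.T @ Q1)         # n x l
        Q1, _ = np.linalg.qr(A @ Q2)           # m x l
    Q2, _ = np.linalg.qr(A.T @ Q1)             # n x l
    # Compress A to an l x l matrix and factor it with an unpivoted QR.
    B = Q1.T @ A @ Q2                          # l x l
    Qb, R = np.linalg.qr(B)
    U = Q1 @ Qb
    return U, R, Q2

# Demo: a matrix of exact rank 50 should be recovered to near
# machine precision, since l >= 50 and q power iterations are used.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 50)) @ rng.standard_normal((50, 400))
U, R, V = randomized_urv(A, k=50)
print(np.linalg.norm(A - U @ R @ V.T) / np.linalg.norm(A))
```

For a general matrix, the accuracy of such a sketch is governed by the decay of the singular values beyond index k, in the spirit of the 2-norm and Frobenius-norm bounds the paper establishes. For the tensor application, one plausible way such a routine could drive a randomized truncated higher-order SVD is to unfold the tensor along each mode, take the (truncated) left factor of the randomized URV as that mode's basis, and contract to form the core; the sketch below assumes the `randomized_urv` function above, and all names and defaults are again illustrative.

```python
def randomized_thosvd(X, ranks, p=10, q=2, rng=None):
    """Hedged sketch of a randomized truncated higher-order SVD.

    `ranks` gives the target multilinear rank per mode. Reuses
    `randomized_urv` from the sketch above for each mode unfolding.
    """
    rng = np.random.default_rng(rng)
    factors = []
    for mode, k in enumerate(ranks):
        # Mode-m unfolding: mode-m fibers become the columns.
        Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = randomized_urv(Xm, k, p=p, q=q, rng=rng)
        factors.append(U[:, :k])
    # Core tensor: contract X with each factor's transpose per mode.
    core = X
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```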
