Abstract

The rank-(L,L,1) block term decomposition (BTD) of a tensor has recently received increasing attention in diverse high-order data processing tasks, e.g., hyperspectral image restoration and blind source separation. However, the standard alternating least squares algorithm for the rank-(L,L,1) BTD (BTD-ALS) is computationally expensive, which hinders its application to real large-scale data. In this paper, we propose a fast sketching-based algorithm for the rank-(L,L,1) BTD (FastBTD) to address the computational burden of BTD-ALS. Since the dominant cost of BTD-ALS at each iteration lies in a sequence of large-scale least squares subproblems, we first project these subproblems into a low-dimensional subspace via a tensor sketching operator. Approximate solutions of the original subproblems can then be obtained quickly by solving the sketched small-scale subproblems at each iteration of FastBTD. The computational complexity of FastBTD for solving the least squares subproblems at each iteration, O(2JILR + JIR), is significantly lower than that of BTD-ALS, O(2I³LR + I³R), when J ≪ I². Moreover, we provide a theoretical error bound for FastBTD. Extensive experiments on both synthetic and real data demonstrate that FastBTD achieves substantial speedup while maintaining accuracy compared with BTD-ALS.
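To illustrate the core idea of replacing a large least squares subproblem with a sketched small-scale one, the following is a minimal sketch in Python. It is not the authors' FastBTD implementation: it uses a generic Gaussian sketch rather than the paper's tensor sketching operator, and all names (sketch_size, n_rows, n_cols) are illustrative assumptions. It only shows that solving the compressed problem min_x ||SAx - Sb|| with J ≪ I² rows closely approximates the solution of the original problem min_x ||Ax - b||.

```python
import numpy as np

rng = np.random.default_rng(0)

# Large overdetermined least squares subproblem: min_x ||A x - b||_2
# n_rows plays the role of I^2, n_cols the role of the (small) factor dimension.
n_rows, n_cols = 100_000, 50
A = rng.standard_normal((n_rows, n_cols))
x_true = rng.standard_normal(n_cols)
b = A @ x_true + 0.01 * rng.standard_normal(n_rows)

# Sketching operator S with J = sketch_size rows, J << I^2.
# A dense Gaussian sketch is used here for simplicity; structured tensor
# sketches are much cheaper to apply in practice.
sketch_size = 500
S = rng.standard_normal((sketch_size, n_rows)) / np.sqrt(sketch_size)

# Solve the small sketched subproblem min_x ||S A x - S b||_2.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Exact solution of the original subproblem, for comparison.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative error of sketched solution:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

Under standard sketching guarantees, the relative error of the sketched solution shrinks as the sketch size J grows, which is the trade-off a theoretical error bound of this kind quantifies.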
