Abstract

In this paper, we focus on computing the principal components of a general tensor, known as the tensor principal component analysis (PCA) problem. It has been proven that the general tensor PCA problem is reducible to its matricization form when the order of the tensor is even, and the resulting problem is usually treated theoretically as a low-rank matrix completion problem. It is common to use the nuclear norm as a surrogate for the rank operator, since it is the tightest convex lower bound of the rank function on the unit ball of the spectral norm. However, most nuclear norm minimization based approaches involve a number of singular value decomposition (SVD) operations. Given a matrix $X \in \mathbb{R}^{m \times n}$ with $m \ge n$, the time complexity of an SVD is $O(mn^2)$, which imposes a prohibitive computational burden when applying these methods in real applications. Non-convex penalties are therefore often adopted in place of the nuclear norm, but the resulting problem is non-convex, and the proximal mapping associated with non-convex regularization is not easy to compute. Such problems are commonly solved by the linearized alternating direction method of multipliers (LADMM). Despite the success of LADMM in practice, it remains unknown whether LADMM converges when solving such non-convex compositely regularized optimization problems. In this paper, we first present a detailed convergence analysis of the LADMM algorithm for solving non-convex compositely regularized optimization with a large class of non-convex penalties. Furthermore, we propose a new efficient and scalable algorithm for matrix principal component analysis, called the Proximal Linearized Alternating Direction Method of Multipliers for Principal Component Analysis (PLADMPCA). Different from traditional matrix factorization methods, PLADMPCA utilizes a linearization technique to formulate the matrix as an outer product of vectors, which greatly improves computational efficiency compared to matrix factorization methods. We empirically evaluate PLADMPCA on synthetic tensor data of different orders. The results show that PLADMPCA has a much lower computational cost than matrix-factorization-based methods. At the same time, it achieves similar or better reconstruction accuracy than state-of-the-art SVD-based matrix completion algorithms, with substantial advantages in efficiency.
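To make the SVD bottleneck mentioned above concrete: nuclear norm minimization methods repeatedly evaluate the proximal operator of the nuclear norm, which amounts to soft-thresholding the singular values of the current iterate and therefore requires a full SVD at every iteration. Below is a minimal NumPy sketch of this standard singular value thresholding step; it is an illustration of the generic operator, not code from the paper, and the function name `svt` and the parameter values are our own.

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of the nuclear norm (singular value thresholding).

    Each call requires a full SVD, which costs O(m * n^2) for an
    m x n matrix with m >= n -- the bottleneck the abstract refers to.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
    return U @ np.diag(s_thr) @ Vt

# Example: one thresholding step on a random 500 x 200 matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 200))
X_low_rank = svt(X, tau=20.0)
print(np.linalg.matrix_rank(X_low_rank))  # rank drops as tau grows
```

Since a dense SVD scales as $O(mn^2)$, methods that avoid it at every iteration, such as the factorization- and linearization-based approaches discussed in the abstract, can be dramatically cheaper on large matrices.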
