Abstract

The Schatten quasi-norm is an approximation of the rank that is tighter than the nuclear norm. However, most Schatten quasi-norm minimization (SQNM) algorithms suffer from the high computational cost of computing the singular value decomposition (SVD) of large matrices at each iteration. In this paper, we prove that for any p, p1, p2 > 0 satisfying 1/p = 1/p1 + 1/p2, the Schatten p-(quasi-)norm of any matrix equals the minimum, over all factorizations into two much smaller factor matrices, of the product of the Schatten p1-(quasi-)norm and the Schatten p2-(quasi-)norm of the factors. We then present and prove the equivalence between this product formulation and its weighted-sum counterpart for two cases: p1 = p2 and p1 ≠ p2. In particular, when p > 1/2, there is an equivalence between the Schatten p-(quasi-)norm of any matrix and the Schatten 2p-norms of its two factor matrices. We further extend these results from two factor matrices to three and more factor matrices, from which it follows that for any 0 < p < 1, the Schatten p-quasi-norm of any matrix is the minimum of the mean of the Schatten (⌊1/p⌋+1)p-norms of ⌊1/p⌋+1 factor matrices, where ⌊1/p⌋ denotes the largest integer not exceeding 1/p.
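As an illustration (ours, not the paper's), the following NumPy sketch checks the two-factor identity numerically for p1 = 1, p2 = 2 (so p = 2/3): a factorization built from the thin SVD of X attains the Schatten p-quasi-norm exactly, while an arbitrarily re-mixed factorization can only increase the product. The helper `schatten` and all variable names are illustrative.

```python
import numpy as np

def schatten(X, p):
    """Schatten p-(quasi-)norm: (sum_i sigma_i(X)^p)^(1/p)."""
    s = np.linalg.svd(X, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(0)
m, n, r = 60, 50, 8
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix

p1, p2 = 1.0, 2.0                    # any p1, p2 > 0 work
p = 1.0 / (1.0 / p1 + 1.0 / p2)      # here p = 2/3

# Factorization built from the thin SVD: U = Ubar * S^(p/p1), V = Vbar * S^(p/p2),
# so U @ V.T = Ubar * S * Vbar.T = X because p/p1 + p/p2 = 1.
Ubar, s, Vt = np.linalg.svd(X, full_matrices=False)
U = Ubar[:, :r] * s[:r] ** (p / p1)
V = Vt[:r, :].T * s[:r] ** (p / p2)

print(schatten(X, p))                      # Schatten p-quasi-norm of X
print(schatten(U, p1) * schatten(V, p2))   # same value (up to rounding)

# Any other factorization X = A @ B.T can only increase the product.
M = rng.standard_normal((r, r))            # random invertible mixing matrix
A, B = U @ M, V @ np.linalg.inv(M).T
print(np.allclose(A @ B.T, X),
      schatten(A, p1) * schatten(B, p2) >= schatten(X, p))
```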

Highlights

  • The affine rank minimization problem arises directly in various areas of science and engineering, including statistics, machine learning, information theory, data mining, medical imaging, and computer vision

  • When p > 1/2 and p1 = p2, there is an equivalence between the Schatten p-(quasi-)norm of any matrix and the Schatten 2p-norms of its two factor matrices, where a representative example is the equivalent formulation of the nuclear norm, i.e., ‖X‖∗ = min_{X=UVᵀ} ‖U‖_F‖V‖_F = min_{X=UVᵀ} (‖U‖_F² + ‖V‖_F²)/2

  • The bi-nuclear and Frobenius/nuclear quasi-norms defined in our previous work [22] and the tri-nuclear quasi-norm defined in our previous work [29] are three important special cases of our unified scalable formulations for Schatten quasi-norms (written out explicitly after this list).
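
For concreteness, the three special cases mentioned above can be written out explicitly. The expressions below use standard notation and follow from the product form of the main result; the subscript names are shorthand used here, and the exact presentation in [22] and [29] may differ slightly.

```latex
% Bi-nuclear quasi-norm (p1 = p2 = 1, hence p = 1/2):
\|X\|_{\mathrm{BiN}} = \min_{X = U V^{T}} \|U\|_{*}\,\|V\|_{*} = \|X\|_{S_{1/2}}

% Frobenius/nuclear quasi-norm (p1 = 2, p2 = 1, hence p = 2/3):
\|X\|_{\mathrm{F/N}} = \min_{X = U V^{T}} \|U\|_{F}\,\|V\|_{*} = \|X\|_{S_{2/3}}

% Tri-nuclear quasi-norm (three factors, p1 = p2 = p3 = 1, hence p = 1/3):
\|X\|_{\mathrm{TriN}} = \min_{X = U V W} \|U\|_{*}\,\|V\|_{*}\,\|W\|_{*} = \|X\|_{S_{1/3}}
```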

Summary

Introduction

The affine rank minimization problem arises directly in various areas of science and engineering, including statistics, machine learning, information theory, data mining, medical imaging, and computer vision. Representative applications include matrix completion [1], robust principal component analysis (RPCA) [2], low-rank representation [3], multivariate regression [4], multi-task learning [5], and system identification [6]. To solve such problems efficiently, the rank function is usually relaxed to its tractable convex envelope, that is, the nuclear norm ‖·‖∗ (the sum of the singular values, also known as the trace norm or Schatten 1-norm), which leads to a convex optimization problem [1,7,8,9]. A family of iteratively re-weighted nuclear norm (IRNN) algorithms has also been proposed to solve various non-convex surrogate (including Schatten quasi-norm) minimization problems. The bi-nuclear and Frobenius/nuclear quasi-norms defined in our previous work [22] and the tri-nuclear quasi-norm defined in our previous work [29] are three important special cases of our unified scalable formulations for Schatten quasi-norms.
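
As context for the per-iteration cost mentioned in the abstract, convex nuclear-norm solvers (and, with a different shrinkage rule, IRNN-style Schatten quasi-norm solvers) typically apply a proximal step that requires a full SVD of the current iterate, which the much smaller factor-matrix formulations are designed to avoid. Below is a minimal NumPy sketch of singular value thresholding, the proximal operator of the nuclear norm; the function name and data are illustrative, not taken from the paper.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: prox of tau * nuclear norm.

    Requires a full SVD of Y at every call, which is the per-iteration
    bottleneck that factored (U, V) formulations are meant to avoid.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft-threshold the singular values
    return (U * s_shrunk) @ Vt

# Example: one proximal step on a noisy low-rank matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
Y = L + 0.1 * rng.standard_normal((100, 80))
X = svt(Y, tau=2.0)
print(np.linalg.matrix_rank(X))              # rank drops sharply after thresholding
```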

Notations and Background
A Unified Formulation for Schatten Quasi-Norms
Unified Schatten Quasi-Norm Formulations of Two Factor Matrices
Extensions to Multiple Factor Matrices
Numerical Experiments
Proofs of Main Results
Proof of Theorem 1
Proof of Theorem 3
Proof of Corollary 2
Conclusions
