Abstract

In recent years, the intrinsic low-rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims superiority over the SVD in computation time and compression ratio. However, GLRAM is very sensitive to large sparse noise or outliers, and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem that minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, extensive experiments on synthetic data show that RGLRAM can exactly recover both the low-rank and the sparse components, whereas this may be difficult for previous state-of-the-art algorithms. We also discuss three issues concerning RGLRAM: the sensitivity to initialization, the generalization ability, and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on face images with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than the other methods.
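For concreteness, the l1-norm formulation described in the abstract can be sketched as follows. This is only a reading aid, assuming the standard GLRAM-style notation in which the $D_i$ are the data matrices, $L$ and $R$ the shared two-sided orthogonal transformations, $M_i$ the core matrices and $E_i$ the sparse error terms; the exact problem statement is the one given in the paper.

$$\min_{L,\,R,\,\{M_i\},\,\{E_i\}} \ \sum_{i=1}^{N} \|E_i\|_1 \quad \text{s.t.} \quad D_i = L M_i R^{\mathsf T} + E_i,\ \ i = 1,\dots,N, \qquad L^{\mathsf T} L = I, \ \ R^{\mathsf T} R = I.$$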

Highlights

  • In the community of pattern recognition, machine learning and computer vision, a commonly used tenet is that the datasets of interest lie in a single linear subspace or in multiple linear subspaces

  • These five methods, sorted in decreasing order of Peak Signal-to-Noise Ratio (PSNR), are: Robust GLRAM (RGLRAM), Generalized Low Rank Approximations of Matrices (GLRAM), Robust Principal Component Analysis (RPCA), PCAL1 and Total Variation (TV)

  • The PSNR of RGLRAM is 4.55 higher than that of GLRAM on the Olivetti Research Laboratory (ORL) database, and 5.28 higher on Yale. These results show that RGLRAM has the best recovery performance (a PSNR sketch follows this list)
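For reference, the PSNR used in the comparison above can be computed as in the following generic sketch. It assumes 8-bit images (peak value 255) and is not code from the paper.

```python
import numpy as np

def psnr(original, recovered, peak=255.0):
    """Peak Signal-to-Noise Ratio (in dB) between two images of equal size."""
    diff = np.asarray(original, dtype=float) - np.asarray(recovered, dtype=float)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```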


Summary

Introduction

In the community of pattern recognition, machine learning and computer vision, a commonly used tenet is that the datasets of interest lie in a single linear subspace or in multiple linear subspaces. The resulting two-dimensional subspace methods mainly include two-dimensional PCA (2dPCA) [10], two-dimensional SVD (2dSVD) [11], two-dimensional LDA (2dLDA) [12], two-directional two-dimensional PCA ((2d)²PCA) [13], Generalized Low Rank Approximations of Matrices (GLRAM) [14,15] and so on. Among them, the latter two methods have equivalent tri-factorization formulations, that is, they use two-sided transformations rather than single-sided ones. For the sake of simplicity, we set $M = \{M_i\}_{i=1}^{N}$ and $E = \{E_i\}_{i=1}^{N}$. Without considering the orthogonal constraints in problem (8), we construct its partial augmented Lagrange function:

$$f_{\mu}(L, R, M, E, Y) = \sum_{i=1}^{N} \Big( \|E_i\|_1 + \langle Y_i,\, D_i - L M_i R^{\mathsf T} - E_i \rangle + \frac{\mu}{2} \, \|D_i - L M_i R^{\mathsf T} - E_i\|_F^2 \Big).$$
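To make the alternating scheme behind this augmented Lagrangian concrete, here is a minimal, hypothetical Python/NumPy sketch of an ALM-style iteration for the constrained problem D_i = L M_i R^T + E_i. The function name rglram_alm, the random initialization, and the parameters mu, rho and n_iter are illustrative assumptions, not the paper's exact algorithm: the core matrices M_i are updated by a two-sided projection, the sparse errors E_i by soft-thresholding (the proximal operator of the l1-norm), the orthogonal factors L and R by SVD-based Procrustes-type updates, followed by the usual multiplier and penalty updates.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding: proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rglram_alm(D, r1, r2, mu=1e-2, rho=1.5, n_iter=100):
    """Hypothetical ALM-style sketch for RGLRAM.

    D      : list of N data matrices, each of size m x n
    r1, r2 : target dimensions (L is m x r1, R is n x r2)
    Returns the factors L, R, the core matrices M_i and sparse errors E_i.
    """
    m, n = D[0].shape
    N = len(D)
    # Initialize the two-sided transformations with random orthonormal columns.
    L, _ = np.linalg.qr(np.random.randn(m, r1))
    R, _ = np.linalg.qr(np.random.randn(n, r2))
    E = [np.zeros((m, n)) for _ in range(N)]
    Y = [np.zeros((m, n)) for _ in range(N)]

    for _ in range(n_iter):
        # M-step: with orthonormal L and R fixed, project the "cleaned" data.
        M = [L.T @ (D[i] - E[i] + Y[i] / mu) @ R for i in range(N)]

        # E-step: soft-thresholding handles the l1-norm term.
        E = [soft_threshold(D[i] - L @ M[i] @ R.T + Y[i] / mu, 1.0 / mu)
             for i in range(N)]

        # L, R steps: orthogonal Procrustes-type updates via the SVD.
        A = sum((D[i] - E[i] + Y[i] / mu) @ R @ M[i].T for i in range(N))
        U, _, Vt = np.linalg.svd(A, full_matrices=False)
        L = U @ Vt
        B = sum((D[i] - E[i] + Y[i] / mu).T @ L @ M[i] for i in range(N))
        U, _, Vt = np.linalg.svd(B, full_matrices=False)
        R = U @ Vt

        # Multiplier and penalty updates, as in standard ALM schemes.
        Y = [Y[i] + mu * (D[i] - L @ M[i] @ R.T - E[i]) for i in range(N)]
        mu *= rho

    return L, R, M, E
```

In practice a stopping criterion on the constraint residuals, e.g. max_i ||D_i - L M_i R^T - E_i||_F below a tolerance, would replace the fixed iteration count.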
