Abstract

Low-rank and sparse matrix estimation has attracted significant interest in recent years. Such problems are commonly modeled by imposing the l1-norm to pursue a sparse and low-rank matrix decomposition. However, the l1-norm is only a conservative sparsity regularizer and leads to over-penalization. To remedy this issue, this paper presents an adaptive regularizer learning strategy, termed ARLLR, which yields an improved low-rank solution and avoids over-penalization. In the Bayesian formulation, the prior distribution of the singular values is assumed to be Laplacian with hyper-scale parameters. Via full Maximum A Posteriori (MAP) estimation, we learn the optimal scale parameters by revealing their correlation with the inherent variables. We show that the adaptively estimated regularizer corresponds to the log function, and we give the global minimizer of the resulting non-convex problem. Furthermore, by also applying the adaptive regularizer to the sparse part, we propose a double-log-regularized low-rank and sparse matrix decomposition model, denoted ARLLRE. The ADMM algorithm is used to solve the ARLLRE problem, and the convergence of the algorithm is proved. In experiments, we apply ARLLR to image denoising and ARLLRE to foreground and background extraction. Experimental results show that ARLLR improves on state-of-the-art image denoising algorithms in both quantitative metrics and visual quality, while ARLLRE delivers excellent results in foreground and background extraction.
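The abstract does not give the paper's exact update rules, but the key ingredient it describes, replacing l1/nuclear-norm shrinkage of singular values with a log-penalty shrinkage, admits a well-known closed-form proximal step. The sketch below is purely illustrative and not the authors' implementation: the function names, the smoothing constant `eps`, and the one-shot denoising wrapper are our own assumptions. It solves, per singular value, min over x >= 0 of 0.5*(x - y)^2 + lam*log(x + eps), comparing the stationary point against x = 0 because the objective is non-convex.

```python
import numpy as np

def log_thresh(y, lam, eps):
    """Illustrative proximal operator of lam*log(x + eps) over x >= 0.

    Stationary points of 0.5*(x - y)^2 + lam*log(x + eps) solve the
    quadratic x^2 + (eps - y)x + (lam - y*eps) = 0, whose discriminant
    simplifies to (y + eps)^2 - 4*lam.
    """
    y = np.asarray(y, dtype=float)
    disc = (y + eps) ** 2 - 4.0 * lam
    cand = np.maximum(((y - eps) + np.sqrt(np.maximum(disc, 0.0))) / 2.0, 0.0)

    # Non-convex objective: keep the stationary point only where it
    # actually beats the boundary solution x = 0.
    obj = lambda v: 0.5 * (v - y) ** 2 + lam * np.log(v + eps)
    x = np.zeros_like(y)
    keep = (disc >= 0) & (obj(cand) <= obj(np.zeros_like(y)))
    x[keep] = cand[keep]
    return x

def low_rank_log(Y, lam, eps=1e-2):
    """One-shot low-rank estimate: log-threshold the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(log_thresh(s, lam, eps)) @ Vt
```

Unlike soft thresholding, which subtracts a constant lam from every singular value, this shrinkage penalizes large singular values only mildly (the log flattens out) while still zeroing small ones, which is the "avoids over-penalization" behavior the abstract attributes to the learned log regularizer.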
