Abstract
In recent years, a series of matching pursuit and hard thresholding algorithms have been proposed to solve the sparse representation problem with an ℓ0-norm constraint. Stochastic hard thresholding methods have also been proposed, such as stochastic gradient hard thresholding (SG-HT) and stochastic variance reduced gradient hard thresholding (SVRGHT). However, each iteration of all these algorithms requires a hard thresholding operation, which leads to high per-iteration complexity and slow convergence, especially for high-dimensional problems. To address this issue, we propose a new stochastic recursive gradient support pursuit (SRGSP) algorithm, in which only one hard thresholding operation is required in each outer iteration. Thus, SRGSP has a significantly lower computational complexity than existing methods such as SG-HT and SVRGHT. Moreover, we provide a convergence analysis of SRGSP, which shows that it attains a linear convergence rate. Experimental results on large-scale synthetic and real-world datasets verify that SRGSP outperforms state-of-the-art related methods on various sparse representation problems. We also conduct experiments on two real-world applications, image denoising and face recognition, and the results validate that SRGSP obtains better performance than other sparse representation learning optimization methods in terms of PSNR and recognition rate.
Highlights
In recent years, sparse representation has proven to be a useful approach for representing and compressing high-dimensional signals
Inspired by GraSP [22], which is a well-known gradient support pursuit method, we propose an efficient stochastic recursive gradient support pursuit (SRGSP) algorithm to approximate the solution to Problem (2), as outlined in Algorithm 1: Stochastic Recursive Gradient Support Pursuit (SRGSP)
We use two real-world applications to illustrate the performance of our SRGSP algorithm against other sparse learning optimization methods, including GraSP [22], stochastic gradient hard thresholding (SG-HT) [23], stochastic variance reduced gradient hard thresholding (SVRGHT) [24], and loopless semi-stochastic gradient descent with less hard thresholding (LSSG-HT) [34]
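Algorithm 1 itself is not reproduced on this page. As a rough illustration of the idea described above, the following is a minimal sketch, assuming a SARAH-style recursive gradient estimator inside GraSP-like outer iterations on a sparsity-constrained least-squares objective, with a single hard thresholding operation per outer iteration. The function names, step size, and loop counts are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -s)[-s:]
    out[idx] = x[idx]
    return out

def srgsp_sketch(A, y, s, eta=0.01, n_outer=20, n_inner=50, rng=None):
    """Hypothetical sketch: min (1/2n)||Ax - y||^2  s.t.  ||x||_0 <= s,
    using a recursive (SARAH-style) stochastic gradient estimator and
    only ONE hard thresholding per outer iteration."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_outer):
        # Anchor: one full gradient at the start of each outer iteration.
        v = A.T @ (A @ x - y) / n
        w_prev, w = x, x - eta * v
        for _ in range(n_inner):
            i = rng.integers(n)
            # Recursive estimator: correct v with one sampled gradient pair.
            v = v + A[i] * (A[i] @ w - y[i]) - A[i] * (A[i] @ w_prev - y[i])
            w_prev, w = w, w - eta * v
        # The single hard thresholding of this outer iteration.
        x = hard_threshold(w, s)
    return x
```

The contrast with SG-HT/SVRGHT is that the projection onto the sparsity constraint happens once per outer loop rather than after every stochastic step, which is what reduces the per-iteration cost.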
Summary
Sparse representation has proven to be a useful approach for representing and compressing high-dimensional signals. The unknown signal of interest is regarded as a sparse combination of a few columns of a given dictionary, and this task is usually formulated as a sparsity-constrained problem. Such sparse representation problems are common in image denoising, image inpainting, face recognition, and related areas [2,3,4]. Image denoising is a classical problem of improving image quality in computer vision. Its aim is to recover the clean image x from the noisy image y = x + e, where e is generally additive white Gaussian noise [5]. Denoising is generally realized by the following three types of methods: transform-domain [6], spatial filtering [7], and dictionary learning-based methods [8,9]
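The observation model y = x + e and the sparsity-constrained formulation can be illustrated with a small self-contained sketch. For simplicity it assumes an orthonormal dictionary (a stand-in for, e.g., a wavelet basis; the dictionaries in dictionary learning-based denoising are learned and generally overcomplete), in which case the best s-sparse least-squares fit reduces to hard thresholding the analysis coefficients. All names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, s = 64, 5

# Hypothetical orthonormal dictionary via QR of a random matrix.
D, _ = np.linalg.qr(rng.standard_normal((d, d)))

# s-sparse clean signal x = D @ alpha, observed with additive Gaussian noise.
alpha = np.zeros(d)
alpha[:s] = [5.0, -4.0, 3.0, -6.0, 2.0]
x = D @ alpha
y = x + 0.1 * rng.standard_normal(d)   # observation model y = x + e

# With orthonormal D, the s-sparse fit is exact: hard threshold D.T @ y,
# i.e. keep only the s largest-magnitude analysis coefficients.
coeffs = D.T @ y
keep = np.argpartition(np.abs(coeffs), -s)[-s:]
alpha_hat = np.zeros(d)
alpha_hat[keep] = coeffs[keep]
x_hat = D @ alpha_hat                  # denoised estimate of x
```

Because thresholding discards the (d - s) coefficients that carry mostly noise, the reconstruction x_hat is closer to the clean x than the raw observation y is.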