Abstract

In dictionary learning, sparse regularization is used to promote sparsity and has played a major role in the development of dictionary learning algorithms. The ℓ1-norm is one of the most popular sparse regularizers owing to its convexity and the tractable convex optimization problems it induces. However, the ℓ1-norm yields biased solutions and, in certain applications, performs worse than nonconvex sparse regularizers. In this work, we adopt the nonconvex generalized minimax-concave (GMC) penalty as the sparse regularizer in a dictionary learning model. Using an alternating optimization scheme, we solve the sparse coding subproblem with the forward–backward splitting (FBS) algorithm. To improve convergence speed and performance, we further incorporate Nesterov’s acceleration technique and an adaptive thresholding scheme into FBS. In the dictionary update step, we formulate the problem via difference-of-convex (DC) programming and solve it with the DC algorithm (DCA). Two dictionary update algorithms are designed: one updates the dictionary atoms one at a time, and the other updates all atoms simultaneously. The presented dictionary learning algorithms perform robustly in dictionary recovery. Numerical experiments verify the performance of the proposed algorithms and compare them with state-of-the-art algorithms.
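A common statement of the GMC penalty and the resulting dictionary learning objective is sketched below; the notation (the matrix B, the weight λ, and the unit-norm atom constraint) is assumed here and may differ from the paper's exact formulation.

$$\psi_B(x) = \|x\|_1 - \min_{v}\Big\{\|v\|_1 + \tfrac{1}{2}\|B(x-v)\|_2^2\Big\},$$
$$\min_{D,\,X}\ \tfrac{1}{2}\|Y - DX\|_F^2 + \lambda \sum_i \psi_B(x_i) \quad \text{s.t.}\ \|d_j\|_2 = 1,$$

where $Y$ collects the training signals, $D$ is the dictionary with atoms $d_j$, and $x_i$ are the sparse codes. For a suitable choice of $B$, the GMC penalty keeps the overall data-fidelity-plus-penalty term convex, which is what makes nonconvex regularization tractable in this setting.

For the sparse coding step, the sketch below shows a generic accelerated forward–backward splitting (FISTA-style) iteration with an ℓ1 soft-threshold proximal step. It illustrates the gradient/threshold structure and Nesterov extrapolation mentioned above, but it is a simplification under stated assumptions: the GMC-specific update and the adaptive threshold scheme of the paper are not reproduced.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def accelerated_fbs(D, y, lam, n_iter=200):
    """Accelerated forward-backward splitting (FISTA-style) for
    min_x 0.5 * ||y - D x||_2^2 + lam * ||x||_1.
    Simplified sketch: uses the l1 prox, not the GMC-based update."""
    L = np.linalg.norm(D, 2) ** 2                          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)                           # forward (gradient) step
        x_new = soft_threshold(z - grad / L, lam / L)      # backward (proximal) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0   # Nesterov momentum weight
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)      # extrapolation step
        x, t = x_new, t_new
    return x
```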
