Abstract

Nonnegative sparse representation has become highly popular in applications where the signals and the corresponding dictionaries are subject to nonnegativity constraints. Using an adaptive dictionary for the sparse representation of nonnegative signals has been shown to be very effective, but constructing such a dictionary, i.e., dictionary learning, remains challenging. In this paper, we design an algorithm tailored to the sparse representation of nonnegative signals. We consider the so-called determinant sparsity measure, formed from the determinant of the Gram matrix of the sparse coefficients. Based on this measure, we formulate nonnegative dictionary learning as the optimization of a non-convex objective. To reduce the computational complexity of this optimization, difference-of-convex-functions (DC) programming is employed, since the non-convex objective can be expressed as the difference of two convex functions. Because the dictionary learning problem involves two variables and therefore cannot be solved directly by DC programming, an alternating scheme is employed that treats the sparse coding and dictionary update steps in turn, each with DC programming. The proposed dictionary learning algorithm has three advantages: 1) it achieves a higher recovery ratio, even when the signals are only weakly sparse; 2) it generalizes better, converging to a global solution more often than to local optima; and 3) the obtained sparse solution has a low-overlapping property that keeps the solutions sparse and makes the algorithm robust. Numerical experiments and experiments on real-world data demonstrate the effectiveness and applicability of the proposed algorithm.
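To make the determinant sparsity measure concrete, the sketch below computes the determinant of the Gram matrix of a nonnegative coefficient matrix. This is only an illustrative reading of the measure described above, not the authors' exact formulation: we assume the Gram matrix is formed from the row-normalized coefficient matrix `H`, so that rows with disjoint (low-overlapping) supports give a determinant near 1, while heavily overlapping rows drive it toward 0.

```python
import numpy as np

def determinant_sparsity(H):
    """Illustrative determinant sparsity measure (an assumption, not the
    paper's exact definition): det of the Gram matrix of the row-normalized
    coefficient matrix H. Near 1 = low overlap between coefficient rows;
    near 0 = heavy overlap."""
    H = np.asarray(H, dtype=float)
    # Normalize each row so the Gram matrix has a unit diagonal.
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    Hn = H / np.where(norms == 0, 1.0, norms)
    G = Hn @ Hn.T  # Gram matrix of coefficient rows
    return np.linalg.det(G)

# Disjoint supports: Gram matrix is the identity, determinant is 1.
print(determinant_sparsity([[1.0, 0.0], [0.0, 1.0]]))  # → 1.0
# Fully overlapping rows: rank-deficient Gram matrix, determinant is 0.
print(determinant_sparsity([[1.0, 1.0], [1.0, 1.0]]))  # → ~0.0
```

This illustrates why maximizing the measure favors low-overlapping, and hence sparse, coefficient supports.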
