Abstract

Dictionary learning algorithms have been successfully applied to a number of signal and image processing problems. In some applications, however, the observed signals may additionally have a multi-subspace structure that enables block-sparse signal representations. For this case, a new algorithm for learning block-structured dictionaries for block-sparse signal representations is proposed in this paper. It is obtained by solving a sequence of penalized low-rank matrix approximation problems, in which the $\ell_{1,2}$-norm is introduced as a penalty promoting block sparsity, and then using a block coordinate descent approach to estimate the unknowns. The proposed algorithm has the advantage of involving simple closed-form solutions for both the sparse coding and dictionary update stages. In particular, the sparse coding stage reduces to a simple shrinkage operation related to soft thresholding, which is connected to the uniformly most powerful invariant testing procedure. Experimental results are presented showing the improved efficacy and the significant gain in computational time offered by the proposed algorithm over the usual block extension of the K-singular value decomposition (K-SVD) algorithm.
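To make the shrinkage step mentioned above concrete, the following is a minimal sketch of block-wise soft thresholding, i.e., the proximal operator of the $\ell_{1,2}$-norm penalty, which is a standard form such a closed-form sparse coding update can take. The function name, block partition, and threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def block_soft_threshold(x, blocks, lam):
    """Block-wise shrinkage (proximal operator of the l_{1,2} norm).

    Each block x_b of the coefficient vector is scaled by
    max(0, 1 - lam / ||x_b||_2), so blocks with small energy are set
    exactly to zero, which is what promotes block sparsity.
    """
    z = np.zeros_like(x, dtype=float)
    for idx in blocks:                      # idx: index array of one block
        norm_b = np.linalg.norm(x[idx])
        if norm_b > lam:
            z[idx] = (1.0 - lam / norm_b) * x[idx]
    return z

# Toy usage: a coefficient vector partitioned into three blocks of size 2.
x = np.array([0.9, -1.2, 0.05, 0.02, 0.4, 0.3])
blocks = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(block_soft_threshold(x, blocks, lam=0.3))
```

In this toy example the second block has a small $\ell_2$-norm and is zeroed out entirely, while the surviving blocks are shrunk toward zero by a common factor.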
