Abstract

Sparse representation and dictionary learning have attracted considerable research attention over the last two decades and have provided state-of-the-art results in many fields, such as denoising, classification, inpainting, and compression. However, applying general dictionary learning methods such as the Method of Optimal Directions or the Recursive Least Squares Dictionary Learning Algorithm can be computationally expensive, due to the large number of free variables to be learned. Moreover, the signal class sometimes has an obvious repetitive structure that could benefit from a structured dictionary. One way to address these shortcomings is to impose a structure on the dictionary itself; for example, the dictionary can be sparse, or the atoms can be shift-invariant. In practice, imposing a structure means limiting the number of free variables. There are many examples of structured dictionaries, such as the double sparsity model and shift-invariant dictionaries. We recently proposed a closed-form solution for imposing arbitrary structures on a dictionary, called Flexible Structure Dictionary Learning. In this paper, we use this method to impose a shift-invariant structure when training a dictionary. This structure allows us not only to simplify the original solution, making it computationally feasible for large signals, but also to extend the concept of shift-invariance to include variable-sized shifts in different atoms. The proposed dictionary update step finds all the free variables in all the atoms jointly, whereas some shift-invariant structured dictionary methods in the recent literature update one atom at a time. We have compared the proposed method with a general dictionary learning method and another shift-invariant method. Results show that signal approximation is a promising application.
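To illustrate the idea of reducing free variables through structure, the sketch below (a minimal, hypothetical example, not the paper's actual method) builds a shift-invariant dictionary whose atoms are all zero-padded shifts of one short kernel. A general unstructured dictionary of the same size would have one free variable per matrix entry; here, only the kernel entries are free.

```python
import numpy as np

def shift_invariant_dictionary(kernel, signal_len):
    """Build a dictionary whose atoms are shifted copies of one kernel.

    With kernel length L and signal length N, there are K = N - L + 1
    atoms but only L free variables, versus N * K for an unstructured
    dictionary of the same size.
    """
    L = len(kernel)
    K = signal_len - L + 1            # one atom per admissible shift
    D = np.zeros((signal_len, K))
    for k in range(K):
        D[k:k + L, k] = kernel        # place the kernel at offset k
    # Normalize atoms to unit l2 norm, as is conventional.
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    return D

kernel = np.array([1.0, 2.0, 1.0])    # toy 3-tap kernel (an assumption)
D = shift_invariant_dictionary(kernel, signal_len=8)
print(D.shape)                        # (8, 6): 6 atoms, 3 free variables
```

In a learning setting, only the kernel would be updated at each dictionary update step; the paper's contribution is a joint update of all such free variables across all atoms, including atoms with different shift sizes.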
