Abstract

Due to the extensive applications of sparse representation, dictionary learning algorithms have received widespread attention. In this paper, we propose a new separable dictionary learning algorithm: improved separable dictionary learning (ISeDiL). Unlike traditional separable learning methods, we divide the learning procedure into two steps: sparse coding and dictionary optimization. In the first step, we use the separable fast iterative shrinkage-thresholding algorithm (SFISTA) to obtain the sparse coefficient matrices. In the second step, we project the dictionaries onto the oblique manifold and use the conjugate gradient method to optimize them. Projecting the dictionaries onto the oblique manifold allows them to be optimized as a whole. Finally, we present the experimental procedure and the denoising results of the proposed algorithm.
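The two-step procedure the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the separable model is taken to be Y ≈ A X Bᵀ with sparse X, plain projected gradient descent stands in for the paper's conjugate-gradient step, the oblique-manifold retraction is column normalization, and all function names, step sizes, and iteration counts are hypothetical.

```python
import numpy as np

def soft_threshold(Z, t):
    # Elementwise shrinkage operator used by ISTA/FISTA-type methods.
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def sfista(Y, A, B, lam=0.1, n_iter=50):
    """Separable FISTA sketch for min_X 0.5*||Y - A X B^T||_F^2 + lam*||X||_1."""
    X = np.zeros((A.shape[1], B.shape[1]))
    Z, t = X.copy(), 1.0
    # Lipschitz constant of the smooth part: ||A||_2^2 * ||B||_2^2.
    L = np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2
    for _ in range(n_iter):
        grad = A.T @ (A @ Z @ B.T - Y) @ B
        X_new = soft_threshold(Z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
        Z = X_new + ((t - 1.0) / t_new) * (X_new - X)
        X, t = X_new, t_new
    return X

def retract_oblique(D):
    # Retraction onto the oblique manifold: normalize each column to unit norm.
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def learn_separable_dict(Y, p, q, lam=0.05, outer=10, step=0.05, seed=0):
    """Alternate sparse coding and dictionary updates (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    A = retract_oblique(rng.standard_normal((Y.shape[0], p)))
    B = retract_oblique(rng.standard_normal((Y.shape[1], q)))
    for _ in range(outer):
        X = sfista(Y, A, B, lam)              # step 1: sparse coding
        R = A @ X @ B.T - Y                    # reconstruction residual
        A = retract_oblique(A - step * R @ B @ X.T)    # step 2: gradient step
        B = retract_oblique(B - step * R.T @ A @ X)    # then retract to manifold
    return A, B, X
```

The retraction keeps every dictionary atom on the unit sphere after each update, which is the practical meaning of optimizing the dictionaries "as a whole" on the oblique manifold rather than renormalizing atom by atom after an unconstrained solve.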


