Abstract

Transform learning has been proposed as a new and effective formulation for analysis dictionary learning, in which the $\ell _{0}$ norm or the $\ell _{1}$ norm is generally used as the sparsity constraint. The corresponding sparse solutions are obtained by hard thresholding or soft thresholding, respectively. Hard thresholding is essentially a greedy procedure that yields only approximate solutions, while soft thresholding introduces a bias on large elements. In this paper, we propose to employ the $\log$ regularizer in place of the $\ell _{0}$ and $\ell _{1}$ norms in the overcomplete transform learning problem. The resulting minimization problem is nonconvex due to the $\log$ regularizer. We solve it with a simple proximal alternating minimization method, in which a closed-form solution of the $\log$-regularized subproblem is obtained via its proximal operator. This yields an efficient and fast overcomplete transform learning algorithm that alternates between an analysis sparse coding stage and a transform update stage. Theoretical analysis shows that the proposed algorithm obtains sparser solutions and more accurate results. Numerical experiments verify that it outperforms existing transform learning approaches based on the $\ell _{0}$ norm or the $\ell _{1}$ norm, and that it is on par with state-of-the-art image denoising algorithms.
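
The analysis coding stage described above hinges on a closed-form proximal operator for the log penalty. The sketch below is a minimal Python/NumPy illustration, assuming one common parameterization $g(z) = \lambda \log(1 + |z|/\varepsilon)$; the paper's exact regularizer, the parameter names `lam` and `eps`, and the tie-breaking rule are assumptions, not the authors' implementation.

```python
import numpy as np

def prox_log(x, lam, eps):
    """Element-wise proximal operator of the nonconvex log penalty
    g(z) = lam * log(1 + |z| / eps)  (an assumed parameterization).

    For each entry we solve  min_z  0.5*(z - x)^2 + g(z).
    Setting the derivative to zero for z > 0 gives a quadratic whose
    positive root is a candidate; it is kept only if it beats z = 0,
    which is what makes the operator a thresholding rule.
    """
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    out = np.zeros_like(x)

    # Discriminant of  z^2 + (eps - |x|) z + (lam - eps*|x|) = 0
    disc = (ax + eps) ** 2 - 4.0 * lam
    mask = disc >= 0
    z = np.zeros_like(x)
    z[mask] = 0.5 * ((ax[mask] - eps) + np.sqrt(disc[mask]))
    z = np.maximum(z, 0.0)  # stationary point must be nonnegative

    # Nonconvex penalty: compare objective at the stationary point with z = 0
    f_z = 0.5 * (z - ax) ** 2 + lam * np.log(1.0 + z / eps)
    f_0 = 0.5 * ax ** 2
    keep = mask & (z > 0) & (f_z < f_0)
    out[keep] = np.sign(x[keep]) * z[keep]
    return out
```

In an alternating scheme of the kind the abstract describes, this operator would be applied entry-wise to the analysis coefficients (the product of the learned transform and the training data) during the analysis coding stage, before the transform itself is updated in the second stage.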
