Abstract

Convolutional dictionary learning (CDL) is a widely used technique in many applications in the signal/image processing and computer vision fields. While many algorithms have been proposed to improve computational run-time performance during training, a thorough analysis of the direct relationship between reconstruction performance and the dictionary's features (hyper-parameters), such as the filter size and the filter bank's cardinality, has not yet been presented. Since arbitrarily configured dictionaries do not necessarily guarantee the best possible results at test time, a careful selection of the hyper-parameters would be very favorable in both the training and testing stages. In this context, this work aims to provide empirical support for the choice of hyper-parameters when learning convolutional dictionaries. We perform a careful analysis of the effect of varying the dictionary's hyper-parameters through a denoising task. Furthermore, we employ a recently proposed local $\ell_{0, \infty}$ norm as a sparsity measure in order to explore possible correlations between the sparsity induced by the learned filter bank and the reconstruction quality at the test stage.
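In convolutional sparse coding, a local $\ell_{0, \infty}$ measure of this kind reports the maximum number of nonzero coefficients falling within any local spatial neighborhood of the stacked coefficient maps, so sparsity is assessed stripe by stripe rather than globally. Below is a minimal numpy sketch of such a measure; the function name `local_l0_inf`, the `(K, H, W)` layout of the coefficient maps, and the square `window` are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def local_l0_inf(coeff_maps, window):
    """Illustrative sketch of a local l0,inf sparsity measure: the maximum
    number of nonzero coefficients over all sliding spatial windows, counted
    across every filter's coefficient map.

    coeff_maps : array of shape (K, H, W), K coefficient maps from
                 convolutional sparse coding (layout assumed for this sketch).
    window     : side length of the square local neighborhood.
    """
    K, H, W = coeff_maps.shape
    support = coeff_maps != 0  # binary support of the convolutional code
    worst = 0
    for i in range(H - window + 1):
        for j in range(W - window + 1):
            # count nonzeros in this local neighborhood across all K filters
            count = int(support[:, i:i + window, j:j + window].sum())
            worst = max(worst, count)
    return worst
```

For example, `local_l0_inf(X, window=7)` on coefficient maps `X` of shape `(8, 32, 32)` returns the worst-case local nonzero count; a smaller value indicates that the learned filter bank induces a sparser code everywhere in the image, which is the quantity the abstract relates to reconstruction quality.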
