Abstract
Sparsity-based methods, such as wavelets, were the state of the art for inverse problems for more than 20 years before being overtaken by neural networks; U-nets in particular have proven extremely effective. Their success rests on three main ingredients: highly nonlinear processing, massive learning enabled by advances in optimization algorithms and computing hardware (GPUs), and the availability of large training datasets. It is far from obvious which of these three ingredients has the biggest impact on performance. While the many stages of nonlinearity are intrinsic to deep learning, learning from training data could also be exploited by sparsity-based approaches. The aim of our study is to push sparsity to its limits by using, as U-nets do, massive learning and large datasets, and then to compare the results with U-nets. We present a new network architecture, called learnlets, which preserves the properties of sparsity-based methods, such as exact reconstruction and good generalization, while leveraging the power of neural networks for learning and fast computation. We evaluate the model on image denoising tasks. Our conclusion is that U-nets perform better than learnlets on image quality metrics in distribution, while learnlets have better generalization properties.
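The sparsity-based pipeline the abstract builds on can be illustrated with a minimal sketch: an analysis transform, a soft-thresholding nonlinearity, and a synthesis transform that reconstructs the signal. This is not the paper's learnlet implementation — the filter bank here is a fixed Haar-like tight-frame pair and the function name `learnlet_denoise_1d` is hypothetical; in the actual model the analysis filters and thresholds are learned from data. With all thresholds set to zero the sketch reconstructs the input exactly, mirroring the exact-reconstruction property mentioned above.

```python
import numpy as np


def soft_threshold(x, t):
    """Soft-thresholding: the sparsity-promoting nonlinearity."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def learnlet_denoise_1d(x, filters, thresholds):
    """One-scale sketch: analysis -> threshold -> synthesis (adjoint).

    `filters` and `thresholds` are toy fixed values here; in a learnlet
    model they would be learned from training data.
    """
    n = len(x)
    rec = np.zeros(n + 2)
    for f, t in zip(filters, thresholds):
        coeff = np.convolve(x, f, mode="full")           # analysis
        coeff = soft_threshold(coeff, t)                 # sparsify
        rec += np.convolve(coeff, f[::-1], mode="full")  # synthesis
    # The 2-tap filters introduce a one-sample shift; undo it.
    return rec[1 : n + 1]


# Haar-like lowpass/highpass pair; their autocorrelations sum to a
# unit impulse, so zero thresholds give exact reconstruction.
filters = [np.array([0.5, 0.5]), np.array([0.5, -0.5])]

# Toy example: piecewise-constant signal plus Gaussian noise.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = learnlet_denoise_1d(noisy, filters, thresholds=[0.0, 0.15])
```

Thresholding only the highpass channel suppresses noise while keeping the piecewise-constant structure; a learnlet replaces the fixed Haar pair with many learned filters and trained, scale-dependent thresholds.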