Abstract

Convolutional neural networks (CNNs) achieve impressive empirical success on a wide range of tasks; however, their inner workings generally lack interpretability. In this paper, we interpret shallow CNNs that we have trained for the task of positive sparse signal denoising. We identify and analyze common structures among the trained CNNs. We show that the learned CNN denoisers can be interpreted as a nonlinear locally-adaptive thresholding procedure, which is an empirical approximation of the minimum mean square error estimator. Based on our interpretation, we train constrained CNN denoisers and demonstrate that, despite having fewer trainable parameters, they suffer no loss in performance. The interpreted CNN denoiser is an instance of a multivariate spline regression model and a generalization of classical proximal thresholding operators.
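To make the abstract's central claim concrete, the sketch below contrasts a classical proximal (soft-)thresholding operator, which applies one fixed threshold everywhere, with a locally-adaptive variant in which the threshold at each position is predicted from a local window of the noisy signal. This is an illustrative assumption of what "locally-adaptive thresholding" could look like, not the paper's exact trained model; the function names, window weights, and bias are hypothetical.

```python
import numpy as np

def soft_threshold(y, t):
    # Classical proximal thresholding with a single fixed threshold t.
    # One-sided (no negative branch) because the signals here are positive.
    return np.maximum(y - t, 0.0)

def locally_adaptive_denoise(y, weights, bias):
    # Hypothetical locally-adaptive variant: a small convolution over the
    # noisy signal predicts a non-negative, position-dependent threshold,
    # loosely mimicking a shallow CNN denoiser read as adaptive thresholding.
    context = np.convolve(y, weights, mode="same") + bias
    t = np.maximum(context, 0.0)   # threshold varies with local context
    return np.maximum(y - t, 0.0)
```

With zero window weights and a constant bias, the adaptive operator collapses to the classical fixed-threshold one, which illustrates the abstract's point that the interpreted denoiser generalizes classical proximal thresholding.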
