Abstract

DNN-based super-resolution (SR) models inherit the black-box nature of DNNs and offer little transparency. However, few works focus on interpreting low-level SR models, and the limited existing gradient-based interpretability methods often incur large computational costs and produce spurious or noisy feature attributions that mix positive and negative signals. In this paper, we propose a gradient-based attribution method, L2X (Learning to eXplain), which provides post-hoc visualization and interpretation for SR models by quantifying the attribution of individual features with respect to the SR output and generating a heatmap in pixel/input space. L2X relies on forward-pass activations and efficiently propagates the importance signal from the output neuron through the layers to the input in a single pass. Moreover, both positive and negative attributions are taken into account during back-propagation. We conduct a cross-architectural analysis of state-of-the-art SR networks and investigate their inner workings. We experimentally demonstrate the potential of L2X as a research tool for diagnosing and visualizing the most relevant features of a given SR model, offering insights that can guide further improvements.
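
The abstract does not spell out L2X's propagation rules, so the following is only a minimal sketch of the general idea it describes: a single-backward-pass, gradient-based attribution that maps an SR output region back to a signed heatmap over input pixels, keeping both positive and negative contributions. The names sr_model, lr_image, and region are hypothetical placeholders, and gradient-times-input stands in for the paper's actual attribution rule.

    import torch

    def attribution_heatmap(sr_model, lr_image, region=None):
        """Attribute an SR output region to input pixels in one backward pass.

        sr_model: a PyTorch module mapping LR images to HR images (assumed).
        lr_image: float tensor of shape (1, C, H, W).
        region:   optional index/slice into the HR output selecting the
                  neurons to explain (defaults to the whole output).
        Returns a signed (H, W) heatmap with both positive and negative
        attributions preserved.
        """
        lr = lr_image.clone().requires_grad_(True)
        sr = sr_model(lr)                    # forward pass stores activations
        target = sr if region is None else sr[region]
        target.sum().backward()              # one pass: output neuron(s) -> input
        # Gradient x input keeps the sign of each pixel's contribution,
        # rather than discarding negative evidence.
        heatmap = (lr.grad * lr).sum(dim=1).squeeze(0)
        return heatmap.detach()

A single forward and backward pass suffices here, which is the efficiency property the abstract claims for L2X, in contrast to perturbation- or sampling-based attribution methods that need many forward passes per explanation.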
