Abstract

Deep-learning-based super-resolution (SR) methods require an avalanche of training images, yet they do not adapt their parameters at test time to cope with novel blur-kernel scenarios. Although recent meta-learning-based SR techniques adapt trained model parameters by leveraging a test image's internal patch recurrence, they need heavy pre-training on an external dataset for initialization. Moreover, their shallow exploration of internal information and failure to amplify salient edges lead to blurry SR images, and inference is delayed by a threshold-dependent adaptation phase. In this paper, we present a Multi-scale Optimized Attention-aware Meta-Learning framework for SR (MAML-SR) that exploits the multi-scale hierarchical self-similarity of recurring patches in a test image. Specifically, without any pre-training, we directly meta-train our model with a second-order optimization over the first-order adapted parameters from the intermediate scales, which are in turn directly supervised with the ground-truth HR images. At each scale, non-local self-similarity is maximized and salient edges are amplified using a novel cross-scale spectro-spatial attention learning unit. We also drastically reduce inference delay by placing a metric-dependent constraint on the gradient updates for a test image. We demonstrate our method's superior super-resolving capability on four benchmark SR datasets.
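The second-order meta-training the abstract refers to follows the general MAML pattern: a first-order (inner) adaptation step on a task loss, then an outer update whose gradient is back-propagated *through* the adapted parameters. The following is a minimal toy sketch of that pattern only; the quadratic loss, the function names, and the learning rates are all illustrative assumptions, not the paper's actual model or losses.

```python
import numpy as np

# Toy quadratic "task" loss: L(theta) = 0.5 * ||theta - target||^2.
# This is a stand-in for illustration only, not the paper's SR loss.
def grad(theta, target):
    return theta - target

def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.05):
    """One second-order MAML-style update over a batch of toy tasks.

    For this quadratic loss the Hessian is the identity, so the
    second-order factor (I - inner_lr * H) reduces to the scalar
    (1 - inner_lr); for a real model it would be a Jacobian product.
    """
    outer_grad = np.zeros_like(theta)
    for target in tasks:
        # Inner (first-order) adaptation on the task.
        adapted = theta - inner_lr * grad(theta, target)
        # Outer gradient through the inner step:
        # dL(adapted)/dtheta = (I - inner_lr * H) @ grad(adapted).
        outer_grad += (1.0 - inner_lr) * grad(adapted, target)
    return theta - outer_lr * outer_grad / len(tasks)

theta = np.zeros(2)
tasks = [np.array([1.0, 1.0]), np.array([-1.0, 2.0])]
for _ in range(200):
    theta = maml_step(theta, tasks)
# theta converges toward the minimizer of the average adapted loss,
# which for this toy loss is the mean of the task targets.
```

In this toy setting the optimum is simply the mean of the targets; the point is the shape of the update, an outer gradient taken through a first-order adapted copy of the parameters, which is what distinguishes second-order meta-training from plain fine-tuning.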
