Abstract

Convolution-based networks show impressive performance on the Single Image Super-Resolution (SISR) task by making full use of local patterns. However, they require deeper structures to capture long-range dependencies, which is inefficient. Hence, Non-Local Attention (NLA) was introduced to search for similar patterns and exchange information globally. Nevertheless, naïve NLA only exchanges information at the original spatial scale and has computational complexity that grows quadratically with the spatial size of the input image. Besides, the global fusion of all similar patterns may lead to blurring, which deviates from the goal of SISR to accurately recover image details. Hence, naïve NLA is not suitable for SISR. To solve this problem, we propose a novel Cross-Scale Collaborative Attention (CSCA) designed for SISR. Since similar patterns tend to occur at different locations and scales, we explore self-similarity and cross-scale similarity simultaneously on a multi-scale feature pyramid to exploit richer details and improve recovery quality. To guarantee that the recovered details are accurate, we aggregate the features in each pyramid level into groups and only exchange information between the most relevant groups across scales. Hence, CSCA is efficient and can accurately recover details using the rich information contained in multiple scales. Furthermore, we propose a Cross-Scale Collaborative Network (CSCN) by inserting a few CSCA modules into a simple backbone network. Extensive experiments show the effectiveness and efficiency of CSCA, and the state-of-the-art performance of CSCN on the SISR task.
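To make the complexity claim concrete, below is a minimal sketch of the naïve non-local attention that the abstract contrasts against, assuming the standard embedded-Gaussian formulation in PyTorch. It is not the paper's CSCA module; the layer names, channel-reduction ratio, and residual connection are illustrative assumptions. The point is that the affinity matrix has shape (H·W) × (H·W), so both memory and compute scale quadratically with the spatial size of the input.

```python
import torch
import torch.nn as nn

class NaiveNonLocalAttention(nn.Module):
    """Naive non-local attention: every spatial position attends to every
    other position at the original scale, giving an (H*W) x (H*W) affinity
    matrix that is quadratic in the spatial size."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # query embedding
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # key embedding
        self.g = nn.Conv2d(channels, inter, kernel_size=1)      # value embedding
        self.out = nn.Conv2d(inter, channels, kernel_size=1)    # restore channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')

        affinity = torch.softmax(q @ k, dim=-1)        # (B, HW, HW): quadratic in HW
        fused = affinity @ v                           # (B, HW, C')
        fused = fused.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(fused)                     # residual fusion


# Example: a 48x48 feature map already yields a 2304 x 2304 affinity matrix.
y = NaiveNonLocalAttention(64)(torch.randn(1, 64, 48, 48))
```

CSCA, by contrast, restricts the exchange to the most relevant feature groups across levels of a multi-scale pyramid, which is what makes it both cheaper and less prone to the blurring caused by globally fusing all similar patterns.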
