Abstract

Deep learning has recently shown great potential in super-resolution (SR) tasks. However, most deep-learning-based SR networks are optimized via a pixel-level loss (e.g., L1 or L2/MSE), which drives the network to output the average of all plausible predictions, leading to blurred details. In SR tasks with large scaling factors (e.g., ×4, ×8), this limitation is further aggravated. To alleviate it, we propose a Gradient-Prior-based Super-Resolution network (GPSR). Specifically, a detail-preserving Gradient Guidance Strategy is proposed to fully exploit gradient priors to guide the SR process from two aspects. On the one hand, an additional gradient branch is introduced into GPSR to provide critical structural information. On the other hand, a compact gradient-guided loss is proposed to strengthen the constraints on spatial structure and to prevent the blind restoration of high-frequency details. Moreover, two residual spatial attention adaptive aggregation modules are proposed and incorporated into the SR branch and the gradient branch, respectively, to fully exploit the crucial intermediate features and enhance the feature representation ability. Comprehensive experimental results demonstrate that the proposed GPSR outperforms state-of-the-art methods in both subjective visual quality and objective quantitative metrics on SR tasks with large scaling factors (i.e., ×4 and ×8).
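The abstract does not give the exact formulation of the gradient-guided loss, so the following is only a minimal NumPy sketch of the general idea it describes: a pixel-level L1 term combined with an L1 term on gradient (edge) maps, so that structural mismatches between the super-resolved and ground-truth images are penalized explicitly. The finite-difference gradient operator and the `weight` balancing factor are assumptions for illustration, not the paper's method.

```python
import numpy as np

def gradient_map(img):
    # Finite-difference approximation of the spatial gradient magnitude
    # (a stand-in for whatever gradient operator the paper uses).
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_guided_loss(sr, hr, weight=0.1):
    # Pixel-level L1 term plus a gradient-space L1 term that constrains
    # the spatial structure of the SR output toward the HR target.
    pixel_loss = np.mean(np.abs(sr - hr))
    grad_loss = np.mean(np.abs(gradient_map(sr) - gradient_map(hr)))
    return pixel_loss + weight * grad_loss
```

A perfect reconstruction yields zero loss, while a constant-offset prediction is penalized only by the pixel term, since its gradient map matches the target's.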
