Abstract

Deep neural networks can significantly improve the quality of image super-resolution. However, previous work has made insufficient use of multi-scale features in low-resolution images and of channel-wise information, which hinders the representational ability of CNNs. To address these issues, we propose a multi-scale fractal residual attention network (MFRAN). Specifically, MFRAN consists of fractal residual blocks (FRBs), dual-enhanced channel attention (DECA), and dilated residual attention blocks (DRABs). Among them, the FRB applies a multi-scale extension rule, expanding recursively into a fractal structure that detects multi-scale features; the DRAB combines dilated convolutions to learn a generalizable and expressive feature space with a larger receptive field; and DECA employs a one-dimensional convolution to realize cross-channel information interaction, using channel shuffling to enhance the flow of information between groups. Then, we integrate horizontal feature representations via local residual connections and feature fusion. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed approach outperforms state-of-the-art methods in both quantitative metrics and visual quality.
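
To make the DECA description concrete, here is a minimal PyTorch sketch of a channel-attention module that uses a one-dimensional convolution for cross-channel interaction followed by channel shuffling between groups, as the abstract describes. The class name `DECASketch`, the kernel size, the group count, and the global-average-pooling squeeze step are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DECASketch(nn.Module):
    """Hypothetical sketch of dual-enhanced channel attention (DECA):
    a 1-D convolution provides cross-channel interaction, and a channel
    shuffle mixes information between groups. Hyperparameters are
    assumptions for illustration."""

    def __init__(self, channels: int, k: int = 3, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x 1 x 1
        # 1-D conv over the channel axis: parameter count is independent of C
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()
        self.groups = groups

    def channel_shuffle(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        g = self.groups
        # reshape -> transpose -> flatten: interleaves channels across groups
        return (x.view(b, g, c // g, h, w)
                 .transpose(1, 2)
                 .reshape(b, c, h, w))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                       # B x C x 1 x 1
        y = y.squeeze(-1).transpose(1, 2)      # B x 1 x C
        y = self.sigmoid(self.conv(y))         # per-channel weights in (0, 1)
        y = y.transpose(1, 2).unsqueeze(-1)    # back to B x C x 1 x 1
        return self.channel_shuffle(x * y)     # reweight, then shuffle


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(DECASketch(64)(x).shape)             # torch.Size([1, 64, 32, 32])
```

Two design points follow from the abstract's description: the 1-D convolution keeps the attention's parameter count independent of the channel width, and shuffling after reweighting lets the attention-scaled channels from different groups mix in subsequent layers.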
