Abstract

Deep convolutional neural networks (CNNs) have recently made remarkable advances in single image super-resolution (SISR). Most existing SISR methods build their CNN structures on residual connections, dense connections, or their variants. However, nearly all of these methods adopt single-path structures, which makes it difficult for them to fully exploit the complementary contextual information provided by different ways of feature extraction (e.g., residual and dense connections). In this paper, we develop a novel dual-path attention network (DPAN), which comprises dual-path attention groups (DPAGs) with dual skip connections (DSCs), in order to combine the advantages of both residual and dense connections for better SR performance. Each DPAG contains several dual-path blocks (DPBs) and a path attention fusion (PAF). The DPBs realize the dual-path topology, while the PAF further improves the discriminative representation ability through a channel attention (CA) mechanism, adaptively fuses the complementary contextual information produced by the two paths, and stabilizes the network. Our DPAN attends well to high-frequency information because each DSC contains a local skip connection and an adaptively weighted global skip connection (AWGSC), which further adaptively bypasses low-frequency features. Extensive experimental results demonstrate the superiority of the proposed DPAN over current state-of-the-art SISR methods in terms of both quantitative metrics and visual quality. For instance, for bicubic (BI) degradation on the challenging Urban100 dataset, DPAN achieves the best PSNR of 33.22 dB for scale ×2, 29.20 dB for scale ×3, and 26.99 dB for scale ×4.
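To make the dual-path idea concrete, the sketch below shows a minimal PyTorch-style dual-path block that combines a residual path and a dense (concatenation) path and fuses them with channel attention. The module names (DualPathBlock, ChannelAttention), layer widths, and fusion layout are illustrative assumptions rather than the paper's exact DPB/PAF configuration, which the abstract does not specify.

```python
# Minimal sketch of a dual-path block with channel-attention fusion.
# All names and hyperparameters here are assumptions for illustration,
# not the paper's actual implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (CA)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))             # rescale each channel


class DualPathBlock(nn.Module):
    """Residual path + dense path, fused with channel attention."""

    def __init__(self, channels: int, growth: int = 32):
        super().__init__()
        # Residual path: conv -> ReLU -> conv, added back to the input.
        self.res_path = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Dense path: produce new features and concatenate with the input.
        self.dense_path = nn.Sequential(
            nn.Conv2d(channels, growth, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Fuse both paths back to the base width, then apply channel
        # attention (a stand-in for the paper's PAF module).
        self.fuse = nn.Conv2d(channels + channels + growth, channels, 1)
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        res = x + self.res_path(x)                          # residual connection
        dense = torch.cat([x, self.dense_path(x)], dim=1)   # dense connection
        fused = self.fuse(torch.cat([res, dense], dim=1))
        return x + self.ca(fused)                           # local skip connection


if __name__ == "__main__":
    block = DualPathBlock(channels=64)
    out = block(torch.randn(1, 64, 48, 48))
    print(out.shape)  # torch.Size([1, 64, 48, 48])
```

In this sketch, the local skip connection around the fused output lets low-frequency content bypass the block, which is the same role the abstract assigns to the DSCs; the adaptively weighted global skip connection (AWGSC) would wrap the whole network in an analogous way.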
