Abstract

As a network grows deeper, residual connections continue to learn well, but a purely residual structure may treat the entire input region indiscriminately. An attention mechanism can, to a certain extent, focus the network on regions of interest, improving the learning of essential areas while reducing the computational load. Combining these two advantages therefore has substantial research significance, since it both improves efficiency and reduces computation. We propose a dense residual connection network that combines a feature-fusion attention approach for image super-resolution. The dense residual block is enhanced with pixel and channel attention blocks, and a dual-path channel design incorporating global maximum pooling and global average pooling is used. A hybrid loss function is also proposed to increase the network's sensitivity to the maximum error between individual pixels. The PSNR, SSIM, and L∞ performance metrics improved after applying the hybrid loss function and our attention techniques. The experimental results demonstrate that our approach has several advantages over recent methods and performs well on many testing datasets.
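The abstract does not give implementation details, but the three components it names (dual-path channel attention, pixel attention, and a max-error hybrid loss) admit a compact sketch. Below is a minimal PyTorch illustration; the module and parameter names (ChannelAttention, PixelAttention, HybridLoss, reduction, alpha) are assumptions for illustration, not the paper's actual API, and the L1+L∞ combination is one plausible reading of the hybrid loss described.

```python
# Illustrative sketch only; names and hyperparameters are assumptions,
# not taken from the paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Dual-path channel attention: global average pooling and global
    maximum pooling feed a shared bottleneck MLP, and the two descriptors
    are fused into a per-channel gate."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the two pooled descriptors, then gate the input channels.
        w = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * w


class PixelAttention(nn.Module):
    """Pixel attention: a 1x1 convolution produces a spatial gate map
    that reweights every pixel position."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.conv(x))


class HybridLoss(nn.Module):
    """One possible hybrid loss: an L1 reconstruction term plus an
    L-infinity term that penalizes the worst single-pixel error."""

    def __init__(self, alpha: float = 0.9):
        super().__init__()
        self.alpha = alpha
        self.l1 = nn.L1Loss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        linf = (sr - hr).abs().amax()  # maximum absolute pixel error
        return self.alpha * self.l1(sr, hr) + (1.0 - self.alpha) * linf
```

In this reading, the attention blocks would be inserted inside each dense residual block, and the L∞ term explains the improved maximum-error metric reported in the abstract: the gradient of the max term concentrates on the single worst pixel, directly suppressing outlier errors that an L1 average would dilute.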
