Abstract

With the development of artificial intelligence, deep learning has been widely applied to image super-resolution reconstruction. To address the insufficient feature extraction, detail loss, and vanishing gradients that affect super-resolution reconstruction based on traditional deep learning, we propose a lightweight multihierarchical feature fusion network for single-image super-resolution. A key component of our network is the dual residual block. To extract features more effectively while keeping the number of parameters as low as possible, the dual residual block is designed as an excite-and-squeeze structure. To transmit feature information, we add an autocorrelation weight unit to the dual residual block, which weights each channel according to the image feature information. Extensive experiments show that our method significantly outperforms LapSRN, MSRN, and other representative methods. Compared with the baseline method, PSNR on the SET14, URBAN100, and MANGA109 datasets improves by 5 dB and SSIM improves by 4%.
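
The abstract describes a residual block with a per-channel weighting unit, but gives no implementation details. The following PyTorch sketch is only an illustration of that general idea under our own assumptions: the module names (ChannelWeight, DualResidualBlock), the reduction ratio, the expansion factor, and the exact placement of the two skip connections are hypothetical and are not taken from the paper.

```python
# Minimal sketch, assuming an SE-style channel weighting inside a block with
# two skip connections; all sizes and names below are illustrative guesses.
import torch
import torch.nn as nn

class ChannelWeight(nn.Module):
    """Weights each channel from its own global statistic (assumed design)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # one statistic per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                            # weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))             # rescale each channel

class DualResidualBlock(nn.Module):
    """Wide ("excite") then narrow ("squeeze") conv path with two residuals."""
    def __init__(self, channels, expansion=2):
        super().__init__()
        wide = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, wide, 3, padding=1),   # expand channels
            nn.ReLU(inplace=True),
            nn.Conv2d(wide, channels, 3, padding=1),   # squeeze back
        )
        self.weight = ChannelWeight(channels)

    def forward(self, x):
        inner = x + self.body(x)                     # first (inner) residual
        return x + self.weight(inner)                # second (outer) residual

# Usage: a 64-channel feature map keeps its shape through one block.
feat = torch.randn(1, 64, 32, 32)
out = DualResidualBlock(64)(feat)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```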
