Abstract

Image restoration, which aims to recover high-quality images from corrupted versions, is in wide demand across application scenarios. State-of-the-art methods address this problem by designing convolutional blocks in multistage architectures. However, existing methods extract and fuse features blindly, without considering which parts of the features are most effective for image restoration. In this paper, we propose a significance-wise mechanism that identifies the importance of each region in the feature representation and supplies crucial feature information. Through a nonlinear scoring function, the significance-wise mechanism assigns higher importance values to regions that contribute more to restoration. The attention mechanism produces two types of information: (i) feature-sufficient maps with abundant feature representations, and (ii) significance-wise maps encoding the importance of patch-level information in the corrupted images. By coordinating feature-sufficient maps and significance-wise maps, the network focuses greater attention on the crucial parts of the feature information. Furthermore, we design a multistage feature fusion block built on the significance-wise mechanism. Compared with existing attention mechanisms, the significance-wise mechanism can identify and generate crucial feature representations for multiple image restoration tasks. Owing to this design, a single network handles multiple restoration tasks, requiring only a one-time adjustment of the channel count. Extensive quantitative and qualitative experiments demonstrate that the effective image restoration network (EIRN) outperforms existing state-of-the-art algorithms on eleven datasets across a range of image restoration tasks, including image deblurring, denoising, super-resolution, and deraining.
The source code and pretrained models will be available at https://github.com/XinyueZhangqdu/EIRN.
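To make the significance-wise idea concrete, the sketch below shows one plausible reading of the abstract: a nonlinear scoring function maps a feature-sufficient map to a significance-wise map in (0, 1), which then reweights the features so that more useful regions receive higher values. The function name, the per-channel linear scoring form, and the sigmoid nonlinearity are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def significance_wise_attention(features, w, b):
    """Illustrative sketch (assumed form, not EIRN's real code).

    features: (C, H, W) feature-sufficient map.
    w:        (C,) per-channel scoring weights (hypothetical).
    b:        scalar bias (hypothetical).
    Returns the reweighted features and the (H, W) significance-wise map.
    """
    # Score each spatial position with a linear projection across
    # channels followed by a sigmoid nonlinearity: regions judged more
    # useful for restoration receive values closer to 1.
    scores = np.tensordot(w, features, axes=([0], [0])) + b  # (H, W)
    significance = 1.0 / (1.0 + np.exp(-scores))             # in (0, 1)
    # Broadcast the significance-wise map over all channels.
    return features * significance, significance

# Toy usage: uniform features and cancelling weights give a score of 0,
# so every position gets significance sigmoid(0) = 0.5.
feats = np.ones((2, 4, 4))
out, sig = significance_wise_attention(feats, np.array([1.0, -1.0]), 0.0)
```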
