Recently, the quality of generated images in image super-resolution (SR) has improved significantly due to the widespread application of convolutional neural networks. Existing super-resolution methods often overlook the importance of network width and instead prioritize designing deeper network structures to improve performance. Deeper networks, however, demand greater computational resources and are more challenging to train, making them less suitable for resource-constrained devices. To address this issue, we propose a novel method called the disentangled feature fusion network (DFFN). Specifically, we construct dual feature extraction streams (DFES) using the strategies of feature disentanglement and parameter sharing, which increase the network's width without increasing the number of parameters. To facilitate the interaction of information between the dual feature extraction streams, we design a feature interaction module (FIM) that separately enhances high- and low-frequency features. Furthermore, a feature fusion module (FFM) is presented to efficiently fuse different levels of high- and low-frequency feature information. The proposed DFFN integrates DFES, FIM, and FFM, which not only widens the network without increasing parameters but also minimizes the loss of crucial feature information. Extensive experiments on five benchmark datasets demonstrate that our method significantly outperforms other state-of-the-art lightweight super-resolution methods, achieving a superior balance between parameter count, reconstruction performance, and model complexity. The code is available at https://github.com/zhoujy-aust/DFFN.
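To make the core idea of the abstract concrete, the following is a minimal PyTorch sketch of a parameter-shared dual-stream block: one set of convolutional weights is applied to two disentangled feature branches, so the effective width doubles without adding parameters. The class name, the crude low/high-frequency split via average pooling, and the additive fusion are illustrative assumptions, not the actual DFES/FIM/FFM modules; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedDualStream(nn.Module):
    """Illustrative dual-stream block (hypothetical, not the paper's code):
    the same convolutional weights process two disentangled branches,
    widening the network without increasing the parameter count."""

    def __init__(self, channels: int):
        super().__init__()
        # A single set of weights shared by both streams.
        self.shared_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Crude frequency disentanglement (assumption for illustration):
        # a blurred map approximates low frequencies, the residual the high ones.
        low = F.interpolate(
            F.avg_pool2d(x, 2), size=x.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        high = x - low
        # Both streams pass through the same (parameter-shared) convolutions.
        low_feat = self.shared_conv(low)
        high_feat = self.shared_conv(high)
        # Naive additive fusion; the paper's FIM/FFM would replace this step.
        return x + low_feat + high_feat


if __name__ == "__main__":
    block = SharedDualStream(channels=32)
    y = block(torch.randn(1, 32, 48, 48))
    print(y.shape)  # torch.Size([1, 32, 48, 48])
    # Parameter count equals that of a single stream, despite two branches.
    print(sum(p.numel() for p in block.parameters()))
```

The design choice this sketch highlights is that weight sharing keeps the parameter budget of a single stream while exposing the shared filters to two complementary feature distributions, which is the mechanism the abstract credits for widening the network "without increasing parameters."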