Combining transformers and convolutional neural networks (CNNs) has become a popular strategy for improving performance on a variety of image restoration tasks. However, the hyperparameters governing feature levels in such hybrids are typically chosen empirically, which inevitably introduces redundant features that hinder effective restoration. Moreover, current methods fuse global and local information in a simple, direct manner and thus fail to fully exploit the potential of hybrid architectures. To address these issues, we propose a key feature fusion hybrid network (KF2H-Net) that reduces redundancy and dynamically fuses key features. On one hand, we design learnable selection mechanisms within the hybrid network's different units to select global and local key features, strengthening the network's ability to perceive and choose features at different depths. On the other hand, a parameter fusion module performs dynamic feature fusion, refining the multi-feature fusion scheme to emphasize the features most critical to image restoration. To verify the generality of KF2H-Net, we evaluate it in three typical degradation scenarios: underwater, low-light, and hazy images. KF2H-Net thus offers a new way to design hybrid models for practical restoration applications. Extensive experiments show that KF2H-Net achieves state-of-the-art performance across all three scenarios.