Multi-scale feature fusion has been recognized as an effective strategy for boosting the quality of low-light images. However, most existing methods extract multi-scale contextual information directly from severely degraded and down-sampled low-light images, so that substantial noise and degradation contaminate the learned multi-scale features. Moreover, directly concatenating multi-scale feature maps produces highly redundant and overlapping features and fails to account for the distinct contribution of each scale. To address these challenges, this paper presents a novel progressive Refined-Mixed Attention Network (RMANet) for low-light image enhancement. The proposed RMANet first performs single-scale pre-enhancement and then progressively applies multi-scale spatial-channel attention fusion in a coarse-to-fine fashion. Additionally, we devise a Refined-Mixed Attention Module (RMAM) that first learns spatial- and channel-dominant features in parallel and then selectively integrates these dominant features across multiple scales in both the spatial and channel dimensions. Notably, our proposed RMANet is a lightweight yet flexible end-to-end framework that adapts to diverse application scenarios. Extensive experiments on three popular benchmark datasets demonstrate that our approach surpasses existing methods in terms of both quantitative metrics and visual quality. The code will be available at https://github.com/kbzhang0505/RMANet.
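To make the RMAM idea concrete, below is a minimal PyTorch sketch of a module that learns spatial- and channel-dominant features in parallel and selectively mixes them, as the abstract describes. Every architectural detail (layer sizes, pooling choices, the `RMAM` class name, and the learned fusion weights) is our assumption for illustration; the abstract does not specify the authors' implementation.

```python
# Hypothetical sketch of a Refined-Mixed Attention Module (RMAM).
# Assumed design: a squeeze-and-excitation-style channel branch and a
# conv-based spatial branch run in parallel, then are selectively fused
# with learned weights. Not the authors' actual architecture.
import torch
import torch.nn as nn

class RMAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel branch: global pooling followed by a bottleneck gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: a 7x7 conv over pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Learned scalar weights for selectively mixing the two branches.
        self.mix = nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-dominant features.
        ch = x * self.channel_gate(x)
        # Spatial-dominant features from mean- and max-pooled maps.
        stats = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)],
            dim=1,
        )
        sp = x * self.spatial_gate(stats)
        # Selective integration of the parallel dominant features.
        w = torch.softmax(self.mix, dim=0)
        return w[0] * ch + w[1] * sp


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)  # a single-scale feature map
    print(RMAM(32)(feats).shape)  # torch.Size([1, 32, 64, 64])
```

In a multi-scale setting, one such module could be applied per scale before fusion, which would avoid the naive concatenation the abstract criticizes; how RMANet actually wires the modules across scales is only described in the full paper.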