Abstract
Image enhancement techniques are commonly used to address problems such as insufficient brightness, heavy noise, and low contrast in low-light images. Notably, deep learning-based approaches have recently achieved substantial advances in this domain. However, learning-based methods often require large numbers of parameters and deep multi-layer network structures to achieve high-quality enhancement, which limits their application in real-time image processing. To address this problem, a lightweight Feature Activation Guided Multi-Receptive Field Attention Network (FAMANet) is designed in this paper. The Wavelet Feature Activation Block (WFAB) introduced in the network uses the discrete wavelet transform and residual connections to selectively activate image features, reducing redundant information in the feature maps and improving computational efficiency. In addition, the Multi-Receptive Field Attention (MRFA) introduced in this paper addresses the limited pixel information and feature loss that stem from a single input image by attending to image structure at scales ranging from fine details to the overall composition. By exploiting image information more fully and distinguishing global from local features, MRFA improves the speed and efficiency of real-time image processing. Extensive experiments show that FAMANet significantly outperforms state-of-the-art methods on low-light image enhancement and exposure correction tasks.
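To make the WFAB idea concrete, the following is a minimal sketch of how wavelet-based feature activation with a residual connection could look in PyTorch. It is not the authors' code: the Haar decomposition, the sigmoid gating, and all module and parameter names (`HaarDWT`, `WaveletFeatureActivation`, `gate`, `fuse`) are illustrative assumptions about one plausible realization of "DWT + residual selective activation".

```python
# Hypothetical sketch of the WFAB idea: a one-level Haar DWT splits features
# into low/high-frequency sub-bands, a learned sigmoid gate keeps the
# informative sub-bands (selective activation), and a residual connection
# preserves the input. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarDWT(nn.Module):
    """One-level 2D Haar DWT applied per channel via fixed stride-2 convs."""
    def __init__(self):
        super().__init__()
        ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
        lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
        hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
        hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
        # Shape (4, 1, 2, 2): one fixed analysis filter per sub-band.
        self.register_buffer("kernels", torch.stack([ll, lh, hl, hh]).unsqueeze(1))

    def forward(self, x):
        b, c, h, w = x.shape  # assumes even H and W
        x = x.reshape(b * c, 1, h, w)
        out = F.conv2d(x, self.kernels, stride=2)        # (B*C, 4, H/2, W/2)
        return out.reshape(b, c * 4, h // 2, w // 2)

class WaveletFeatureActivation(nn.Module):
    """Gate wavelet sub-bands, fuse, upsample, and add a residual path."""
    def __init__(self, channels):
        super().__init__()
        self.dwt = HaarDWT()
        self.gate = nn.Conv2d(channels * 4, channels * 4, kernel_size=1)
        self.fuse = nn.Conv2d(channels * 4, channels, kernel_size=1)

    def forward(self, x):
        sub = self.dwt(x)                                 # frequency sub-bands
        sub = sub * torch.sigmoid(self.gate(sub))         # selective activation
        act = F.interpolate(self.fuse(sub), size=x.shape[-2:],
                            mode="bilinear", align_corners=False)
        return x + act                                    # residual connection

x = torch.randn(1, 16, 64, 64)
print(WaveletFeatureActivation(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```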
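Similarly, a multi-receptive-field attention block can be sketched as parallel branches with different effective receptive fields whose outputs are reweighted by an attention map before fusion. The dilated-convolution branches and the squeeze-and-excitation-style attention below are assumptions chosen to illustrate the "fine details to overall composition" behavior described in the abstract, not the paper's MRFA design.

```python
# Hypothetical sketch of the MRFA idea: parallel dilated 3x3 convolutions
# capture local detail (small dilation) through global structure (large
# dilation); channel attention weights the branches before fusion.
import torch
import torch.nn as nn

class MultiReceptiveFieldAttention(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        # Squeeze-and-excitation-style attention over concatenated branches.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels * len(dilations), channels * len(dilations), 1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats * self.attn(feats)) + x   # attended fusion + residual

x = torch.randn(1, 16, 64, 64)
print(MultiReceptiveFieldAttention(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```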