Abstract
Low-light image enhancement (LLIE) aims to refine illumination and restore the details of low-light images. However, current deep LLIE models still face two crucial issues that cause blurred textures and inaccurate illumination: 1) low-quality detail recovery due to information loss, and 2) complex, often redundant model structures. In this paper, we therefore propose a simple yet effective deep LLIE architecture, termed the Full-Resolution Context Network (FRC-Net). To avoid the visual information loss caused by feature scaling, we present a novel full-resolution representation strategy that replaces all feature scaling operations, preventing information degradation by keeping intermediate features at the original resolution. The structure of FRC-Net is very simple, containing only 12 cascaded layers: 7 convolutional layers and 5 newly designed context attention (CA) modules. The plug-and-play CA module overcomes the limited receptive field caused by the shallow structure by learning global context while retaining local details. Extensive experiments show that our model achieves better detail-recovery quality than current state-of-the-art methods, with relatively fewer parameters and faster inference speed.
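To make the described pipeline concrete, the following is a minimal PyTorch sketch of a 12-layer full-resolution stack (7 convolutions and 5 context attention blocks, no downsampling). The internals of the `ContextAttention` block, the channel width, and the interleaving order are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class ContextAttention(nn.Module):
    """Illustrative context attention (CA) block (assumed design, not the paper's):
    a global context vector from average pooling gates the full-resolution local
    features, so no spatial scaling is ever applied."""

    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        local = self.local(x)                        # local details at full resolution
        context = x.mean(dim=(2, 3), keepdim=True)   # global context, 1x1 per channel
        return x + local * self.gate(context)        # residual fusion of both cues


class FRCNetSketch(nn.Module):
    """Sketch of the cascaded structure: 7 convs + 5 CA blocks, input resolution
    preserved throughout. The exact ordering of layers is an assumption."""

    def __init__(self, channels=32):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(5):  # alternate conv + CA, all at the original resolution
            layers += [
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                ContextAttention(channels),
            ]
        layers += [nn.Conv2d(channels, 3, 3, padding=1)]  # 7th conv maps back to RGB
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


# Usage: the output keeps the input's spatial size, e.g. for a 1x3x256x256 batch.
if __name__ == "__main__":
    low_light = torch.rand(1, 3, 256, 256)
    enhanced = FRCNetSketch()(low_light)
    assert enhanced.shape == low_light.shape
```

Because every layer operates at the input resolution, the gating on pooled context is what extends the effective receptive field without sacrificing local detail.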