Abstract

Previous coarse-to-fine strategies typically spend equal effort on feature extraction and feature reconstruction, gradually improving image brightness from bottom to top, so that computational resources are not well allocated to restoration. In this paper, we propose a new deep framework for Robust and Fast Low-Light Image Enhancement, dubbed RFLLIE. Specifically, we first use a lightweight CNN encoder, consisting of a few convolutional and pooling layers, to build a feature pyramid for restoration. A coarse-to-fine recovery module, composed of cascaded depth blocks with well-designed spatial attention layers and progressive dilation Resblocks, then performs feature aggregation and global-to-local restoration. As such, RFLLIE forms a light-head, heavy-tail architecture that focuses more on feature reconstruction than on extraction. Additionally, we propose a decomposition-guided restoration loss based on Retinex theory that adopts an "enhancement before decomposition" strategy, instead of the commonly used "decomposition before enhancement", to further improve contrast and suppress noise. Extensive experiments demonstrate that our method outperforms existing state-of-the-art methods both quantitatively and visually, and achieves a better trade-off between performance and efficiency. Our code will be available at https://github.com/JianghaiSCU/RFLLIE.
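The decomposition-guided loss rests on the Retinex assumption that an image I factors element-wise into reflectance R and illumination L, i.e. I = R ⊙ L. As a minimal illustrative sketch (not the paper's actual decomposition network), the channel-wise maximum is a common illumination estimate in Retinex-based methods; `retinex_decompose` below is a hypothetical helper using that assumption:

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Decompose an RGB image I into illumination L and reflectance R
    under the Retinex assumption I = R * L (element-wise).

    The channel-wise max illumination estimate is an assumption common
    to Retinex-based methods; RFLLIE's learned decomposition may differ.
    """
    # Illumination: per-pixel maximum over the colour channels (H x W x 1);
    # eps avoids division by zero in fully dark pixels.
    L = image.max(axis=-1, keepdims=True) + eps
    # Reflectance: what remains after dividing out the illumination.
    R = image / L
    return R, L

# Toy usage: a random "low-light" image recomposes exactly as R * L.
img = np.random.rand(4, 4, 3) * 0.2
R, L = retinex_decompose(img)
recon = R * L
```

Under "enhancement before decomposition", such a factorization would be applied to the enhanced output rather than the dark input, so the loss can constrain contrast (via L) and noise (via R) separately.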
