Abstract

Recovering normal-exposure images from low-light inputs is a challenging task. Recent works have proposed a wide range of deep learning methods to address it. Nevertheless, most of them treat the cartoon and texture components of an image in the same way, resulting in a loss of detail. A recent effort, the unfolding total variation network (UTVNet), recovers the normal-light image by roughly decomposing it into a noise-free smooth layer and a detail layer using total variation (TV) regularization, and then processes the two components in different ways. However, its enhanced images exhibit color distortion owing to the limited representational ability of the TV model. To address this limitation, we design a cartoon-texture guided network, named CatNet, for low-light image enhancement. CatNet uses a cartoon-guided normalizing flow to retain cartoon information and a U-Net equipped with an elaborate frequency-domain attention mechanism, denoted FAU-Net, to recover texture information. Concretely, the ground-truth image is decomposed into cartoon and texture components, which guide the training of the corresponding recovery modules. We also design a hybrid loss in the spatial and frequency domains to train CatNet. Compared with state-of-the-art methods, our method achieves better results, producing richer colors and finer details. The source code and datasets are publicly available at https://github.com/shibaoshun/CatNet.
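As a rough illustration of the two mechanisms named in the abstract, the sketch below decomposes a ground-truth image into a TV-smoothed cartoon layer plus a residual texture layer, and pairs it with a simple hybrid spatial/frequency loss. The TV solver (scikit-image's `denoise_tv_chambolle`), the L1 terms, and the weight `alpha` are illustrative assumptions, not the authors' exact formulation; see the repository linked above for the actual implementation.

```python
import torch
from skimage.restoration import denoise_tv_chambolle

def cartoon_texture_split(image, tv_weight=0.1):
    """Rough cartoon-texture decomposition via TV smoothing.

    `image` is an HxWx3 float array in [0, 1]. The TV-smoothed result
    serves as the cartoon layer; the residual carries the texture.
    (Illustrative only; the paper's decomposition may differ in detail.)
    """
    cartoon = denoise_tv_chambolle(image, weight=tv_weight, channel_axis=-1)
    texture = image - cartoon
    return cartoon, texture

def hybrid_loss(pred, target, alpha=0.1):
    """Assumed hybrid loss: L1 in the spatial domain plus an L1 penalty
    on the 2-D Fourier spectra. The exact terms and the weight `alpha`
    are placeholders for the paper's (unspecified) formulation."""
    # Spatial-domain fidelity between enhanced and ground-truth tensors.
    spatial = (pred - target).abs().mean()
    # Frequency-domain fidelity, comparing complex spectra through
    # their real/imaginary parts.
    pred_f = torch.view_as_real(torch.fft.fft2(pred, norm="ortho"))
    target_f = torch.view_as_real(torch.fft.fft2(target, norm="ortho"))
    freq = (pred_f - target_f).abs().mean()
    return spatial + alpha * freq
```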
