Abstract

Low-light image enhancement has attracted considerable attention for its ability to recover visible content and textures for video security and forensics. Unfortunately, conventional Retinex-based methods pay little attention to the complementarity of illumination and reflection. Motivated by this observation, we propose a new low-light image enhancement framework, named R2Net, which relights the restored low-light image based on the complementarity of illumination and reflection. R2Net consists of three parts: image decomposition (Decom Net), reflection restoration (Restore Net), and illumination enhancement (Relight Net). In the Restore Net, we propose a mixed twofold attention (MTFA) module with linear complexity, which introduces illumination information to model the mutual relationship between illumination and reflection. In MTFA, to capture richer information, we first map the inputs into different feature spaces and then combine them in different orders to obtain multiple sets of enhanced features. Accordingly, we design a differential enhanced fusion module (DEFM) to mix these features. Ablation studies show that the MTFA module significantly improves the performance of the restoration network. Finally, we propose a new illumination enhancement network (Relight Net), in which we introduce reflection to generate global information and define a luminance factor to tune the exposure level of output images, making our model more robust and flexible. Experiments show that our proposed method outperforms state-of-the-art methods in both quantitative comparison and visual perception.
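The decompose-restore-relight pipeline can be sketched at a high level. The sketch below is illustrative only: it uses a toy Retinex decomposition (I = R ⊙ L, with the channel-wise maximum as the illumination map) and a simple gamma curve in place of the learned Decom, Restore, and Relight networks, which the abstract does not specify in detail. The `alpha` parameter stands in for the luminance factor that tunes the exposure level of the output; all function names here are hypothetical.

```python
import numpy as np

def decompose(image, eps=1e-6):
    # Toy stand-in for Decom Net: take the channel-wise maximum as the
    # illumination map L and recover reflectance R via I = R * L.
    illumination = image.max(axis=-1, keepdims=True)
    reflectance = image / (illumination + eps)
    return reflectance, illumination

def restore(reflectance, illumination):
    # Stand-in for Restore Net. The real model uses the MTFA module,
    # conditioning reflectance restoration on illumination; here we
    # simply clip the reflectance to a valid range.
    return np.clip(reflectance, 0.0, 1.0)

def relight(illumination, alpha=0.5):
    # Stand-in for Relight Net: a gamma curve whose exponent plays the
    # role of the luminance factor controlling the exposure level.
    return np.clip(illumination, 0.0, 1.0) ** alpha

def enhance(image, alpha=0.5):
    # Full pipeline: decompose, restore reflectance, relight
    # illumination, then recompose the enhanced image.
    reflectance, illumination = decompose(image)
    return restore(reflectance, illumination) * relight(illumination, alpha)

low = np.full((4, 4, 3), 0.04)  # uniformly dark toy image
out = enhance(low, alpha=0.5)
```

With `alpha < 1` the gamma curve lifts low illumination values, so the enhanced output is brighter than the input; `alpha` can be varied to select different exposure levels.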
