Abstract

A great many low-light image restoration methods build their models on Retinex theory. However, most of these methods struggle to enhance image details. To achieve simultaneous restoration and enhancement, we study deep low-light image enhancement from the perspective of texture-structure decomposition, that is, learning an image smoothing operator. Specifically, we design a low-light restoration and enhancement framework in which a Deep Texture-Structure Decomposition (DTSD) network estimates two complementary constituents, a Fine-Texture (FT) map and a Prominent-Structure (PS) map, from the low-light image. Since these two maps are trained to approximate the FT and PS maps obtained from the normal-light image, they can be combined into the restored image by pixel-wise addition. The DTSD network has three parts: the U-attention block, the Decomposition-Merger (DM) block, and the Upsampling-Reconstruction (UR) block. To explore multi-level informative features at different scales more effectively than U-Net, the U-attention block is designed with intra-group and inter-group attention. The DM block extracts high-frequency and low-frequency features in low-resolution space. The informative feature maps obtained from these two blocks are then fed into the UR block for the final prediction. Extensive experiments demonstrate that the proposed method achieves simultaneous low-light image restoration and enhancement, and that it outperforms many state-of-the-art approaches on several objective and perceptual metrics.
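
To make the decomposition idea concrete, below is a minimal PyTorch sketch of the texture-structure decomposition described in the abstract: a network predicts FT and PS maps from a low-light input and sums them pixel-wise to form the restored image, with the predicted maps supervised by FT/PS maps decomposed from the paired normal-light image. The backbone, layer widths, and names such as TextureStructureSketch, ft_head, and ps_head are illustrative assumptions; the actual DTSD network (U-attention, DM, and UR blocks) is not reproduced here.

```python
# Minimal sketch of the texture-structure decomposition idea: predict a
# Fine-Texture (FT) map and a Prominent-Structure (PS) map from a low-light
# image, and recover the image by pixel-wise addition of the two maps.
# This is NOT the authors' DTSD network (U-attention, DM, and UR blocks are
# omitted); the backbone and layer sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class TextureStructureSketch(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        # Shared shallow feature extractor (placeholder for the real backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Two complementary heads: fine texture (high-frequency detail) and
        # prominent structure (smoothed, low-frequency content).
        self.ft_head = nn.Conv2d(channels, 3, 3, padding=1)
        self.ps_head = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, low_light: torch.Tensor):
        feats = self.backbone(low_light)
        ft = self.ft_head(feats)   # Fine-Texture map
        ps = self.ps_head(feats)   # Prominent-Structure map
        restored = ft + ps         # pixel-wise addition gives the restored image
        return restored, ft, ps

def decomposition_loss(ft, ps, ft_gt, ps_gt):
    # Push predicted FT/PS maps toward the maps decomposed from the
    # paired normal-light image (L1 used here as a simple stand-in).
    return nn.functional.l1_loss(ft, ft_gt) + nn.functional.l1_loss(ps, ps_gt)

if __name__ == "__main__":
    net = TextureStructureSketch()
    x = torch.rand(1, 3, 64, 64)       # dummy low-light image
    restored, ft, ps = net(x)
    print(restored.shape)              # torch.Size([1, 3, 64, 64])
```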
