Abstract

Global structure and local texture detail play different roles in image enhancement tasks. However, most existing works treat these two components in the same way, without fully considering their distinct characteristics. In this work, we propose a structure-texture aware network (STANet) that successfully exploits the structure and texture features of low-light images to improve perceptual quality. To construct STANet, a fine-scale contour-map-guided filter is introduced to decompose the image into a structure component and a texture component. Then, structure-attention and texture-attention subnetworks are designed to fully exploit the characteristics of these two components. Finally, a fusion subnetwork with attention mechanisms is utilized to explore the internal correlations between the global and local features. Furthermore, to optimize the proposed STANet model, we design a hybrid loss function; in particular, a color loss term is introduced to alleviate color distortion in the enhanced image. Extensive experiments demonstrate that the proposed method improves the visual quality of images; moreover, STANet outperforms most state-of-the-art approaches.
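To make the decomposition step concrete, the sketch below shows a minimal structure-texture split using a standard guided filter (He et al.). It is not the authors' implementation: the paper steers the filter with a fine-scale contour map, whereas here, for simplicity, the image guides itself, and the radius `r` and regularization `eps` are hypothetical example values.

```python
# Illustrative sketch only: structure-texture decomposition of a grayscale
# low-light image with a self-guided filter. The STANet paper instead guides
# the filter with a fine-scale contour map; r and eps are example settings.
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter_self(img, r=8, eps=1e-2):
    """Edge-preserving smoothing of `img`, using the image itself as the guide."""
    win = 2 * r + 1
    mean_i = uniform_filter(img, size=win)
    mean_ii = uniform_filter(img * img, size=win)
    var_i = mean_ii - mean_i * mean_i
    a = var_i / (var_i + eps)            # per-pixel linear coefficient
    b = (1.0 - a) * mean_i
    mean_a = uniform_filter(a, size=win)
    mean_b = uniform_filter(b, size=win)
    return mean_a * img + mean_b         # smoothed output = structure layer


def decompose(img):
    """Split an image into a structure component and a residual texture component."""
    img = img.astype(np.float64)
    structure = guided_filter_self(img)
    texture = img - structure            # texture = input minus structure
    return structure, texture
```

In this formulation the structure layer carries the smooth global illumination and large-scale edges, while the residual texture layer carries fine detail and noise, which is what allows the two attention subnetworks to be specialized to each component.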
