Abstract
Low-light image enhancement aims to improve the visibility of information hidden in dark regions so that it can be exploited, while also improving the visual quality of the image. In this paper, we propose a dual cross-attention multi-stage embedding network (DCMENet) for fast and accurate enhancement of low-light images into high-quality images with high visibility. Enhanced images tend to contain amplified noise that degrades image quality; we mitigate this by introducing an attention mechanism into the encoder–decoder structure, which also allows the encoder–decoder to concentrate on the dark areas of the image and better attend to detail features obscured by darkness. In particular, the poor performance of Transformers on small datasets is addressed by fusing CNN-based attention with the Transformer in the encoder. Given the purpose of the low-light enhancement task, we treat the recovery of image detail as being as important as the reconstruction of illumination. For features such as texture details, cascading spatial attention and pixel attention reduces model complexity while improving performance; a minimal sketch of such a cascade is given below. Finally, the global features obtained by the encoder–decoder are fused into the shallow feature-extraction structure to reconstruct the illumination while guiding the network to focus its extraction on information in dark regions. The proposed DCMENet achieves the best results in both objective quality assessment and subjective evaluation, and images enhanced by DCMENet also yield the best performance on computer vision tasks operating in low-light environments.
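To make the detail-recovery step concrete, the following is a minimal PyTorch sketch of a spatial-attention followed by pixel-attention cascade of the kind the abstract describes. The module names, layer sizes, and the residual connection are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Standard spatial attention: pool across channels, then a conv
    produces a single per-pixel weight map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # (B, 1, H, W) channel average
        mx, _ = x.max(dim=1, keepdim=True)     # (B, 1, H, W) channel max
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                        # reweight each spatial location

class PixelAttention(nn.Module):
    """Per-pixel, per-channel attention built from 1x1 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, max(1, channels // 4), 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(1, channels // 4), channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.body(x)

class CascadedDetailBlock(nn.Module):
    """Hypothetical cascade: spatial attention first localizes dark or
    textured regions, pixel attention then refines channel-wise detail
    weights; a residual connection preserves the original content."""
    def __init__(self, channels):
        super().__init__()
        self.sa = SpatialAttention()
        self.pa = PixelAttention(channels)

    def forward(self, x):
        return self.pa(self.sa(x)) + x

# Usage: refine a 64-channel feature map from the shallow extractor.
feats = torch.randn(1, 64, 128, 128)
refined = CascadedDetailBlock(64)(feats)
```

Cascading the two attentions in sequence, rather than computing a joint spatial-channel map, keeps the parameter count low, which is consistent with the abstract's claim of reduced model complexity alongside improved performance.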