Abstract

Images and videos captured in low light often suffer from severe degradation, such as color noise, chromatic aberration, and loss of detail. Most existing convolutional neural network (CNN)-based low-light enhancement methods focus on decomposing the image into illumination and reflectance components via the Retinex model, but they rarely control noise adequately during enhancement and perform poorly under complex lighting conditions. In this letter, we propose a powerful Vision Transformer-based Generative Adversarial Network (Transformer-GAN) for enhancing low-light images. Transformer-GAN consists of two subnets: (1) a feature extraction subnet, in which features are extracted by an iterative multi-branch network, and (2) an image reconstruction subnet, in which the enhancement is completed. The core innovations of Transformer-GAN are the multi-head multi-covariance self-attention (MHMCA) and the light feature-forward module (LFFM). Experiments demonstrate that our method outperforms state-of-the-art low-light enhancement methods on popular low-light datasets.
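The abstract does not specify how MHMCA is computed. As a rough illustration only, the sketch below shows multi-head self-attention computed over channel covariances (in the spirit of cross-covariance attention), which is one plausible reading of "multi-covariance" self-attention; the class name, head count, and temperature parameter are assumptions, not the authors' formulation.

```python
# Hypothetical sketch: multi-head attention over channel "covariance" maps,
# assuming MHMCA resembles cross-covariance-style attention (not confirmed
# by the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CovarianceSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        # Learnable per-head temperature scaling the covariance attention map.
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), tokens = flattened spatial positions.
        b, n, c = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, c // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)  # each: (b, heads, head_dim, n)

        # Normalize along the token axis, then form a head_dim x head_dim
        # covariance-style attention map instead of the usual n x n map.
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)

        out = (attn @ v).reshape(b, c, n).transpose(1, 2)  # back to (b, n, c)
        return self.proj(out)

if __name__ == "__main__":
    layer = CovarianceSelfAttention(dim=64, num_heads=4)
    feats = torch.randn(2, 16 * 16, 64)  # e.g. a 16x16 feature map, 64 channels
    print(layer(feats).shape)  # torch.Size([2, 256, 64])
```

Attending over channels rather than tokens keeps the attention map small for high-resolution feature maps, which is one reason such designs appear in image restoration transformers.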
