Abstract

Images captured under low-light or backlit conditions can suffer from several types of degradation, such as low visibility, strong noise, and color distortion. In this paper, to address these degradations, we propose the Two-stage Perceptual Enhancement Transformer Network (TPET) for low-light image enhancement, which combines the local spatial perception of convolutional neural networks with the global spatial perception of transformers. The method comprises two stages: a feature extraction stage and a detail fusion stage. First, in the feature extraction stage, a transformer-based encoder extracts global features and enlarges the receptive field. Because transformers are weak at capturing local features, we introduce a perceptual enhancement module (PEM) to strengthen the interaction between local and global feature information. Second, between the corresponding encoder and decoder blocks at each level, a feature fusion block (FFB) compensates for feature information at different scales, improving feature reusability and enhancing network stability. In addition, between the two stages, a self-calibration module (SCM) redistributes local features and improves the network's supervision capability. In the detail fusion stage, to further preserve the textural details of the image, we design a detail enhancement unit (DEU) that recovers high-resolution enhanced images. Qualitative comparisons and quantitative analysis show that our method outperforms other low-light image enhancement methods in both subjective visual quality and objective metric values.
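
The abstract only names the modules, so the following PyTorch sketch is purely illustrative: it shows one plausible way the described two-stage layout (transformer-style encoder with PEM, FFB skip fusion, SCM between stages, DEU for output recovery) could be wired together. Every class and layer choice here, including PEM as a depthwise-convolution local branch and SCM/DEU as simple convolutional stand-ins, is our assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PEM(nn.Module):
    """Hypothetical perceptual enhancement module: a depthwise-conv local
    branch added residually to the (globally attended) feature map."""
    def __init__(self, dim):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # local context
        self.pw = nn.Conv2d(dim, dim, 1)                         # channel mixing

    def forward(self, x):               # x: (B, C, H, W)
        return x + self.pw(self.dw(x))  # inject local features into global ones


class FFB(nn.Module):
    """Hypothetical feature fusion block: concatenates same-scale encoder and
    decoder features and fuses them with a 1x1 convolution."""
    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, enc, dec):
        return self.fuse(torch.cat([enc, dec], dim=1))


class TPETSketch(nn.Module):
    """Skeleton of the two-stage pipeline; all submodules are stand-ins."""
    def __init__(self, dim=32):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, 3, padding=1)
        # Stage 1: feature extraction (transformer encoder approximated here
        # by a PEM-augmented conv block), SCM between stages, then decoding.
        self.encoder = nn.Sequential(PEM(dim), nn.Conv2d(dim, dim, 3, padding=1))
        self.scm = nn.Conv2d(dim, dim, 1)  # self-calibration stand-in
        self.decoder = nn.Conv2d(dim, dim, 3, padding=1)
        self.ffb = FFB(dim)
        # Stage 2: detail enhancement unit recovering the enhanced image.
        self.deu = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, 3, 3, padding=1),
        )

    def forward(self, x):
        f = self.embed(x)
        enc = self.encoder(f)
        dec = self.decoder(self.scm(enc))
        fused = self.ffb(enc, dec)                          # skip-level fusion
        return torch.clamp(x + self.deu(fused), 0.0, 1.0)  # residual enhancement


out = TPETSketch()(torch.rand(1, 3, 64, 64))  # -> (1, 3, 64, 64)
```

In a faithful implementation the encoder/decoder would be multi-scale transformer blocks with an FFB at each level; the single-scale version above is kept flat only to make the data flow between the named modules easy to follow.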
