Abstract

Images collected in low-light environments usually suffer from multiple, non-uniformly distributed distortions, including locally dark regions, dim light, backlighting, and so on. In this paper, we propose a Stage-Transformer-Guided Network (STGNet) that effectively handles region-specific degradation distributions and enhances diverse low-light images. Specifically, our STGNet adopts a multi-stage scheme to progressively learn hierarchical features, which improves the robustness of the model. At each stage, we design an efficient transformer with horizontal and vertical attention that jointly captures degradation distributions of different magnitudes and orientations. We also introduce learnable degradation queries that adaptively select task-specific degradation features for enhancement. In addition, we design a histogram loss and combine it with other loss functions so that both global contrast and local details are exploited during network training. Benefiting from these contributions, our STGNet achieves state-of-the-art performance on both synthetic and real-world datasets.
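The abstract describes the histogram loss only at a high level. The sketch below shows one plausible differentiable form of such a loss, assuming soft (Gaussian-binned) intensity histograms of the enhanced output and the reference image are compared so that global contrast is supervised alongside per-pixel losses. The class name, bin count, bandwidth, and soft-binning scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class SoftHistogramLoss(nn.Module):
    """Hypothetical sketch of a histogram loss: compares differentiable
    intensity histograms of the enhanced output and the reference image,
    penalizing global contrast/brightness mismatches in addition to the
    usual per-pixel reconstruction losses."""

    def __init__(self, num_bins: int = 64, bandwidth: float = 0.01):
        super().__init__()
        # Bin centers uniformly spaced over the normalized intensity range [0, 1].
        self.register_buffer("centers", torch.linspace(0.0, 1.0, num_bins))
        self.bandwidth = bandwidth

    def soft_histogram(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) in [0, 1]; each pixel contributes a Gaussian-weighted
        # vote to every bin, then the histogram is normalized per image.
        flat = x.flatten(1).unsqueeze(-1)                           # (B, N, 1)
        weights = torch.exp(-0.5 * ((flat - self.centers) / self.bandwidth) ** 2)
        hist = weights.sum(dim=1)                                   # (B, num_bins)
        return hist / (hist.sum(dim=1, keepdim=True) + 1e-8)

    def forward(self, enhanced: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # L1 distance between the two normalized soft histograms.
        return (self.soft_histogram(enhanced) - self.soft_histogram(reference)).abs().mean()


# Usage: combined with other terms (e.g., a per-pixel L1 loss) during training,
# mirroring the paper's strategy of mixing global-contrast and local-detail objectives:
# total_loss = l1_loss(pred, gt) + lambda_hist * SoftHistogramLoss()(pred, gt)
```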
