Abstract

Low-light enhancement is a crucial computer vision task that aims to enhance under-exposed inputs. While state-of-the-art single-image enhancement methods have made remarkable progress, few attempts have explored the spatial-temporal sequence problem in low-light video enhancement. In this paper, we propose a simple yet highly effective method, termed Adaptive Locally-Aligned Transformer (ALAT), for low-light video enhancement based on visual transformers. ALAT consists of three parts: a feature encoder, a locally-aligned transformer block (LATB), and a pyramid feature decoder. Specifically, the transformer block enables the network to model long-range spatial and appearance dependencies in videos through its parallel self-attention mechanism. However, unlike previous approaches that directly use the vanilla transformer, we argue that locality is significant in low-level vision tasks, since misaligned local contextual features (e.g., edges, shapes) may degrade prediction quality. The proposed LATB is therefore designed to adaptively align each video pixel with its most relevant counterparts within a local region, preserving regional content information. Furthermore, we publish a new real-world low-light video dataset, named ExpressWay, to fill the gap left by the lack of dynamic low-light video scenarios; it contains high-quality videos with moving objects under both dark- and bright-light conditions. We conduct experiments on five benchmarks under three comprehensive settings, covering synthesized, static, and our proposed dynamic low-light video datasets. Extensive experimental results show that ALAT outperforms previous state-of-the-art methods by a large margin of 0.20∼1.10 dB. Our method can also be extended to other video enhancement applications. The project is available at https://github.com/y1wencao/LLVE-ALAT.
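The abstract does not specify how LATB is implemented, but the core idea it describes, restricting self-attention to a local neighborhood so each pixel attends only to nearby, relevant features, can be illustrated with a minimal sketch. The PyTorch snippet below is a hypothetical window-based local attention: the function name `local_window_attention`, the window size, and all tensor shapes are illustrative assumptions, not the authors' actual LATB.

```python
# Hypothetical sketch of local window self-attention over a feature map.
# This is NOT the authors' LATB implementation; names and shapes are assumed.
import torch
import torch.nn.functional as F

def local_window_attention(feat: torch.Tensor, window: int = 7) -> torch.Tensor:
    """For each pixel, attend only over its `window` x `window` neighborhood.

    feat: (B, C, H, W) feature map, e.g., from a per-frame encoder.
    Returns a tensor of the same shape.
    """
    B, C, H, W = feat.shape
    pad = window // 2
    # Gather each pixel's local neighborhood: (B, C*window*window, H*W).
    neigh = F.unfold(feat, kernel_size=window, padding=pad)
    neigh = neigh.view(B, C, window * window, H * W)            # (B, C, K, N)
    query = feat.view(B, C, 1, H * W)                           # (B, C, 1, N)
    # Scaled similarity between each pixel and its K local neighbors.
    attn = (query * neigh).sum(dim=1, keepdim=True) / C ** 0.5  # (B, 1, K, N)
    attn = attn.softmax(dim=2)
    # Weighted sum over the neighborhood, then reshape back to a map.
    out = (attn * neigh).sum(dim=2)                             # (B, C, N)
    return out.view(B, C, H, W)
```

In this simplified formulation, the softmax weights act as the adaptive alignment: neighbors whose features best match the query pixel dominate the output, which loosely mirrors the abstract's description of aligning each pixel with its most relevant local counterparts.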
