Abstract

Recently, deep-learning-based low-light video enhancement methods have drawn wide attention and achieved remarkable performance. However, because dynamic low-light and well-lit video pairs are difficult to collect in real scenes, constructing video sequences for supervised learning and designing a low-light enhancement network for real dynamic video remain challenging. In this paper, we propose a simple yet effective low-light video enhancement method (LVE-S2D), which generates dynamic video training pairs from static videos and enhances low-light video by mining dynamic temporal information. To obtain low-light and well-lit video pairs, a sliding window-based dynamic video generation mechanism is designed to produce pseudo videos with rich dynamic temporal information. Then, a Siamese dynamic low-light video enhancement network is presented, which effectively exploits the temporal correlation between adjacent frames to enhance the video frames. Extensive experimental results demonstrate that the proposed method not only achieves superior performance on static low-light videos, but also outperforms state-of-the-art methods on real dynamic low-light videos.
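
The abstract names a sliding window-based mechanism for turning static footage into pseudo dynamic training pairs but does not detail it. The sketch below is a minimal Python/NumPy illustration of one way such a mechanism could work, assuming the window is slid across an aligned static low-light/well-lit frame pair to mimic camera motion; the function name, the horizontal-only motion, and all parameters are hypothetical, not the paper's exact procedure.

import numpy as np

def sliding_window_pseudo_video(low_frame, normal_frame, crop_size=(256, 256),
                                num_frames=8, stride=16):
    # Synthesize an aligned pseudo dynamic clip pair from one static frame pair.
    # A crop window of crop_size is shifted horizontally by stride pixels per
    # step, so the cropped sequence mimics camera motion while the low-light
    # and well-lit crops stay spatially aligned (illustrative assumption only).
    h, w = low_frame.shape[:2]
    ch, cw = crop_size
    assert h >= ch and w >= cw + (num_frames - 1) * stride, "frame too small"

    top = (h - ch) // 2                      # keep the window vertically centered
    low_clip, normal_clip = [], []
    for t in range(num_frames):
        left = t * stride                    # shift the window to simulate motion
        low_clip.append(low_frame[top:top + ch, left:left + cw])
        normal_clip.append(normal_frame[top:top + ch, left:left + cw])
    return np.stack(low_clip), np.stack(normal_clip)

# Example: one static 720x1280 pair -> an 8-frame aligned pseudo video pair.
low = np.random.rand(720, 1280, 3).astype(np.float32)
normal = np.random.rand(720, 1280, 3).astype(np.float32)
low_seq, normal_seq = sliding_window_pseudo_video(low, normal)
print(low_seq.shape, normal_seq.shape)       # (8, 256, 256, 3) (8, 256, 256, 3)

Because both crops come from the same window positions, the synthesized low-light and well-lit clips stay pixel-aligned frame by frame, which is what makes them usable as supervised training pairs for a temporal enhancement network.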
