Abstract

Images captured in low-light environments suffer severe degradation due to insufficient illumination, which in turn degrades the performance of downstream industrial and civilian vision systems. Existing enhancement methods often introduce noise, chromatic aberration, and detail distortion when enhancing low-light images. To address these problems, this paper proposes an integrated learning approach, LightingNet, for low-light image enhancement. LightingNet consists of two core components: 1) a complementary learning sub-network and 2) a vision transformer (ViT) low-light enhancement sub-network. The ViT sub-network learns to fit the current data and provides local high-level features through a full-scale architecture, while the complementary learning sub-network provides globally fine-tuned features through transfer learning. Extensive experiments confirm the effectiveness of the proposed LightingNet.
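
To make the two-branch design concrete, the following is a minimal PyTorch sketch of the general idea: a transformer branch operating on image patches (standing in for the ViT enhancement sub-network) and a lightweight CNN branch (standing in for the transfer-learned complementary sub-network), with their outputs fused into one enhanced image. All module names, layer sizes, and the 1x1-convolution fusion step are illustrative assumptions; the paper's actual full-scale architecture and transfer-learning setup are not reproduced here.

```python
# Illustrative two-branch enhancer; assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class ViTEnhanceBranch(nn.Module):
    """Toy transformer branch: patch embedding -> transformer encoder -> unpatch."""
    def __init__(self, patch=8, dim=64, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.unembed = nn.ConvTranspose2d(dim, 3, kernel_size=patch, stride=patch)

    def forward(self, x):
        tokens = self.embed(x)                   # (B, dim, H/p, W/p)
        b, c, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)  # (B, N, dim) token sequence
        seq = self.encoder(seq)
        tokens = seq.transpose(1, 2).reshape(b, c, h, w)
        return self.unembed(tokens)              # back to (B, 3, H, W)

class ComplementaryBranch(nn.Module):
    """Toy CNN branch standing in for the transfer-learned complementary features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TwoBranchEnhancer(nn.Module):
    """Fuse the transformer and complementary predictions into one output."""
    def __init__(self):
        super().__init__()
        self.vit = ViTEnhanceBranch()
        self.comp = ComplementaryBranch()
        self.fuse = nn.Conv2d(6, 3, 1)  # 1x1 conv over the concatenated outputs

    def forward(self, x):
        fused = self.fuse(torch.cat([self.vit(x), self.comp(x)], dim=1))
        return torch.sigmoid(fused)     # keep the enhanced image in [0, 1]

low_light = torch.rand(1, 3, 64, 64)    # dummy low-light image in [0, 1]
enhanced = TwoBranchEnhancer()(low_light)
print(enhanced.shape)                   # torch.Size([1, 3, 64, 64])
```

The point of the sketch is only the division of labor the abstract describes: one branch models the current data with attention over patches, the other contributes pretrained complementary features, and a fusion layer combines them.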
