Abstract

Low-light image enhancement studies how to improve the quality of images captured under poor lighting conditions and is of real-world importance. Convolutional neural network (CNN)-based methods with state-of-the-art performance have become the mainstream of research. However, most CNN-based methods improve performance by increasing the width and depth of the network, which demands substantial computing resources. In this paper, we propose a knowledge distillation method for low-light image enhancement. The proposed method uses a teacher-student framework in which the teacher network transfers its rich knowledge to the student network. The student network learns image enhancement under the supervision of ground-truth images and the guidance of the teacher network simultaneously. Knowledge transfer between the teacher and student networks is accomplished by a distillation loss based on attention maps. We design a gradient-guided low-light image enhancement network that is divided into an enhancement branch and a gradient branch, where the enhancement branch learns under the guidance of the gradient branch to better preserve structural information. The teacher and student networks share a similar structure but differ in model size: the teacher network has more parameters and stronger learning capability than the student network. With the help of knowledge distillation, our approach improves the performance of the student network without increasing the computational burden during the testing phase. Qualitative and quantitative experimental results demonstrate the superiority of our method over state-of-the-art methods.
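To illustrate the attention-map-based distillation loss described above, the following is a minimal PyTorch sketch. It assumes activation-based attention maps (channel-wise squared activations, L2-normalized) compared with an L2 loss at paired intermediate layers; the paper's exact attention definition, layer pairing, and loss weighting are not given in the abstract, so these choices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attention_map(features: torch.Tensor) -> torch.Tensor:
    """Collapse a feature tensor (N, C, H, W) into a spatial attention
    map (N, H*W) by averaging squared activations over channels and
    L2-normalizing each map (activation-based attention; assumed form)."""
    attn = features.pow(2).mean(dim=1)   # (N, H, W)
    attn = attn.flatten(1)               # (N, H*W)
    return F.normalize(attn, p=2, dim=1)

def distillation_loss(student_feats, teacher_feats):
    """Sum of L2 distances between student and teacher attention maps,
    computed over paired intermediate layers of the two networks."""
    loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        # Teacher features are detached: only the student is updated.
        loss = loss + F.mse_loss(attention_map(fs), attention_map(ft.detach()))
    return loss
```

In training, this term would typically be combined with the supervised reconstruction loss against the ground-truth image, e.g. total = recon_loss + lam * distillation_loss(student_feats, teacher_feats), where lam is a hypothetical balancing weight not specified in the abstract.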
