Abstract

In this paper, we propose a novel dehazing method based on self-distillation. In contrast to conventional knowledge distillation approaches, which transfer knowledge from large models (teacher networks) to small models (student networks), we introduce a single knowledge distillation network that transfers network parameters to itself for dehazing. In the early training stages, the proposed network transfers scene content (identity) information to the next stage of itself using haze-free data. In the later stages, by contrast, the network transfers haze information to itself using hazy data, enabling accurate dehazing of input images with the scene information acquired in the early stages. Within this single network, parameters are seamlessly updated from extracting global scene features to dehazing the scene. During training, the forward propagation acts as a teacher network, whereas the backward propagation acts as a student network. Experimental results demonstrate that the proposed method considerably outperforms other state-of-the-art dehazing methods.
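
To make the two-phase training scheme concrete, the sketch below illustrates one plausible reading of it: early stages train an identity (scene-content) reconstruction on haze-free images, while later stages treat a frozen forward pass of the same network as the teacher and the gradient-updated pass on hazy input as the student. All names here (DehazeNet, the stage split, the loss choices) are assumptions for illustration, not the authors' actual implementation.

    # Minimal sketch of the stage-wise self-distillation idea, assuming PyTorch.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DehazeNet(nn.Module):
        """A single encoder-decoder network; hypothetical architecture."""
        def __init__(self, ch=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            )
            self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

        def forward(self, x):
            feat = self.encoder(x)            # global scene features
            return self.decoder(feat), feat

    net = DehazeNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)

    def train_step(clean, hazy, stage, total_stages):
        """Early stages: identity reconstruction on haze-free images.
        Later stages: dehazing on hazy images, supervised by the network's
        own frozen forward pass (self-distillation)."""
        opt.zero_grad()
        if stage < total_stages // 2:
            # Early stage: transfer scene-content (identity) information.
            out, _ = net(clean)
            loss = F.l1_loss(out, clean)
        else:
            # Later stage: the forward pass on the clean image plays the
            # teacher role; its detached features supervise the student
            # pass on the hazy input, which is updated via backpropagation.
            with torch.no_grad():
                _, teacher_feat = net(clean)
            out, student_feat = net(hazy)
            loss = F.l1_loss(out, clean) + F.l1_loss(student_feat, teacher_feat)
        loss.backward()    # backward propagation plays the "student" role
        opt.step()
        return loss.item()

Under these assumptions the teacher and student share one set of parameters, matching the paper's claim that forward propagation acts as the teacher and backward propagation as the student within a single network.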
