Abstract

Haze removal remains an essential prerequisite for many image processing and computer vision tasks, and the joint estimation and refinement of transmission maps remain challenging in haze removal methods based on the physical scattering model. In this article, we propose an end-to-end learnable dehazing network, referred to as Guided-Pix2Pix, which jointly estimates and refines the transmission map and then dehazes images through the physical scattering equation. Instead of a two-stage pipeline that predicts and then postprocesses the transmission, Guided-Pix2Pix concatenates a trainable Pix2Pix backbone with a differentiable guided filter embedded as a network layer, which produces refined transmission maps in a single feed-forward pass; it then substitutes the refined maps into the physical scattering equation to restore dehazed images. To verify that the guided filter layer can be embedded in both training and inference, we demonstrate that it is differentiable and capable of propagating features forward and gradients backward. Furthermore, explicit derivatives with respect to the inputs of the guided filter are given, and the relationship between our derivation and that of the original guided filter is also explored. Experiments show that our network is effective and robust for image dehazing, alleviates halo artifacts along edges, and generalizes well.
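To make the pipeline concrete, the following is a minimal sketch (not the authors' released code) of the two ingredients the abstract describes: a differentiable guided filter that refines a coarse transmission map, and the physical scattering equation used to recover the dehazed image. The function names, the window radius r, the regularizer eps, and the transmission lower bound t_min are illustrative assumptions; PyTorch is assumed as the framework.

```python
# Hypothetical sketch: differentiable guided-filter refinement + dehazing
# via the physical scattering model I = J*t + A*(1 - t).
import torch
import torch.nn.functional as F

def box_filter(x, r):
    """Mean filter over a (2r+1)x(2r+1) window; differentiable w.r.t. x."""
    k = 2 * r + 1
    return F.avg_pool2d(x, k, stride=1, padding=r, count_include_pad=False)

def guided_filter(guide, src, r=20, eps=1e-3):
    """Guided filter built from box filters and elementwise arithmetic,
    so gradients flow back through the refinement step."""
    mean_I = box_filter(guide, r)
    mean_p = box_filter(src, r)
    cov_Ip = box_filter(guide * src, r) - mean_I * mean_p
    var_I = box_filter(guide * guide, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I
    return box_filter(a, r) * guide + box_filter(b, r)

def dehaze(hazy, t_coarse, A, t_min=0.1):
    """Refine the coarse transmission with the hazy image as guide, then
    invert the scattering model: J = (I - A) / t + A."""
    guide = hazy.mean(dim=1, keepdim=True)          # single-channel guide
    t = guided_filter(guide, t_coarse).clamp(t_min, 1.0)
    return (hazy - A) / t + A
```

Because the refinement is composed only of average pooling and elementwise operations, it can propagate features forward and gradients backward, which is what allows the guided filter to sit as a layer between the Pix2Pix backbone and the scattering-equation restoration during end-to-end training.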
