Abstract
Image dehazing under deficient data is an ill-posed and challenging problem. Most existing methods tackle this task by developing either CycleGAN-based hazy-to-clean translation or physics-based haze decomposition. However, geometric structure is often not effectively incorporated into their straightforward hazy-clean projection frameworks, which can lead to inaccurate estimation in distant regions. In this paper, we rethink the image dehazing task and propose a depth-aware perception framework, DehazeDP, for robust haze decomposition on deficient data. Our DehazeDP is inspired by the Diffusion Probabilistic Model and forms an end-to-end training pipeline that seamlessly integrates hazy image generation with haze disentanglement. Specifically, in the forward phase, haze is added to a clean image step by step according to the depth distribution. Then, in the reverse phase, a unified U-Net predicts the haze and recovers the clean image progressively. Extensive experiments on public datasets demonstrate that the proposed DehazeDP performs favorably against state-of-the-art approaches. We release the code and models at https://github.com/stallak/DehazeDP.
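To make the forward phase concrete, below is a minimal sketch of one plausible depth-aware haze-injection step, pairing the atmospheric scattering model with a step-indexed schedule so that deeper pixels accumulate haze faster. The function name, the linear schedule, and the scalar airlight are illustrative assumptions, not the paper's exact formulation; see the released code for the actual pipeline.

```python
import torch

def depth_aware_forward(x0: torch.Tensor, depth: torch.Tensor,
                        step: int, num_steps: int,
                        airlight: float = 1.0, beta: float = 1.0) -> torch.Tensor:
    """Add haze to a clean image x0 progressively, guided by scene depth.

    x0:    clean image, shape (B, 3, H, W), values in [0, 1]
    depth: normalized depth map, shape (B, 1, H, W)
    step:  current position in the forward chain (0 .. num_steps)
    """
    # Fraction of the full haze applied by this point of the forward chain
    # (a simple linear schedule, assumed here for illustration).
    frac = step / num_steps
    # Atmospheric scattering model: transmission decays exponentially with depth,
    # so distant pixels (large depth) receive more haze at every step.
    t = torch.exp(-frac * beta * depth)        # (B, 1, H, W), in (0, 1]
    return x0 * t + airlight * (1.0 - t)       # partially hazed image at this step

# Usage sketch: halfway through a 1000-step forward chain.
x0 = torch.rand(1, 3, 256, 256)      # clean image
depth = torch.rand(1, 1, 256, 256)   # normalized depth map
x_mid = depth_aware_forward(x0, depth, step=500, num_steps=1000)
```

The reverse phase would invert this process: a single U-Net, conditioned on the step index, predicts the haze component at each step and subtracts it to recover progressively cleaner estimates.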