Abstract

This work addresses the problem of single image dehazing, with a particular focus on improving visibility restoration. Although extensive studies have been conducted, almost all of them rely heavily on the atmospheric scattering model and usually fail to convincingly restore the visibility of densely hazed images. Motivated by the potential of deep learning, a new end-to-end approach is presented that restores a clear image directly from a hazy one, with an emphasis on real-world weather conditions. Specifically, an Encoder-Decoder is employed as a generator to restore the dehazed image while preserving more image details. It is further found that the performance of the Encoder-Decoder can be boosted substantially by the dual principles of discriminativeness advocated in this paper. On the one hand, the dark channel is re-explored in the framework, yielding a discriminative prior formulated specifically for the dehazing problem. On the other hand, a critic is incorporated for adversarial training against the autoencoding-based generator, implemented as a Wasserstein GAN (generative adversarial network) regularized by the Lipschitz penalty. The proposed approach is trained on a synthetic dataset of hazy images and evaluated on both synthetic and real hazy images. The objective evaluation shows that the proposed approach performs competitively with state-of-the-art approaches and outperforms them in terms of visibility restoration, especially in scenarios of dense haze.
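For context, the dark channel mentioned above is the prior of He et al.: in haze-free outdoor images, the local minimum over all colour channels tends towards zero, so large dark-channel values signal haze. Below is a minimal sketch of how such a dark channel map could be computed; the function name, the 15-pixel patch size, and the use of NumPy/SciPy are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel prior (He et al.): per-pixel minimum over the RGB channels,
    followed by a minimum filter over a local square patch.

    image: H x W x 3 float array with values in [0, 1].
    Returns an H x W map; values near zero indicate little or no haze.
    """
    min_rgb = image.min(axis=2)                      # minimum across colour channels
    return minimum_filter(min_rgb, size=patch_size)  # minimum over each local patch
```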
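The abstract also mentions training the critic adversarially as a Wasserstein GAN regularized by a Lipschitz penalty. The sketch below shows one common form of such a critic objective, with a one-sided (Lipschitz) gradient penalty evaluated on random interpolations between real and generated images; the PyTorch implementation, the function name, and the penalty weight `lam` are assumptions for illustration and need not match the authors' exact formulation.

```python
import torch

def critic_loss_with_lipschitz_penalty(critic, real, fake, lam=10.0):
    """Wasserstein critic loss with a one-sided Lipschitz (gradient) penalty.

    real, fake: batches of shape (N, C, H, W); the penalty is computed on
    random convex combinations of real and generated samples.
    """
    fake = fake.detach()                                   # critic update only
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1)             # per-sample gradient norm
    penalty = (torch.clamp(grad_norm - 1.0, min=0.0) ** 2).mean()  # one-sided penalty
    return critic(fake).mean() - critic(real).mean() + lam * penalty
```

In this formulation the penalty only punishes gradient norms above one, which enforces the 1-Lipschitz constraint without pushing the norm exactly to one as the standard two-sided gradient penalty does.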
