Abstract

The existence of haze significantly degrades visual quality and hence negatively affects the performance of visual surveillance, video analysis, and human–machine interaction. To remove haze from a visual signal, this paper proposes a generative adversarial network for visual haze removal, called HRGAN. HRGAN consists of a generator network and a discriminator network. The generator is a unified network that jointly estimates transmission maps, atmospheric light, and haze-free images (called UNTA). Instead of being optimized by minimizing a pixel-wise loss alone, HRGAN is optimized by minimizing a novel loss function that combines pixel-wise loss, perceptual loss, and the adversarial loss produced by the discriminator network. Classical model-based image dehazing algorithms consist of three separate stages: 1) estimating the transmission map; 2) estimating the atmospheric light; and 3) restoring the haze-free image by applying an atmospheric scattering model to the estimated transmission map and atmospheric light. Such a separated scheme is not guaranteed to achieve optimal results. In contrast, UNTA performs transmission map estimation and atmospheric light estimation simultaneously to obtain a jointly optimal solution. Experimental results on both synthetic and real-world image databases demonstrate that HRGAN outperforms state-of-the-art algorithms in terms of both effectiveness and efficiency.
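For context, the classical three-stage pipeline mentioned above rests on the standard atmospheric scattering model, I = J·t + A·(1 − t), where I is the hazy image, J the haze-free scene, t the transmission map, and A the atmospheric light. The following is a minimal NumPy sketch of the restoration step alone; the function name, the per-channel form of A, and the transmission clamp are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def restore_haze_free(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    to recover the haze-free image J = (I - A) / max(t, t_min) + A.

    hazy: H x W x 3 float array in [0, 1]
    transmission: H x W transmission map in (0, 1]
    atmospheric_light: length-3 array (per-channel A); illustrative assumption
    t_min: lower clamp on t to avoid division blow-up in dense haze
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]
    A = np.asarray(atmospheric_light, dtype=hazy.dtype)
    J = (hazy - A) / t + A
    return np.clip(J, 0.0, 1.0)

# Sanity check: synthesize a hazy image from a known scene, then invert it.
rng = np.random.default_rng(0)
J_true = rng.uniform(0.0, 1.0, size=(4, 4, 3))
t_true = rng.uniform(0.3, 0.9, size=(4, 4))       # above t_min, so clamp is inactive
A_true = np.array([0.8, 0.8, 0.8])
I_hazy = J_true * t_true[..., None] + A_true * (1.0 - t_true[..., None])
J_rec = restore_haze_free(I_hazy, t_true, A_true)
```

Because UNTA estimates t and A jointly inside the generator rather than in separate stages, errors in one estimate are not amplified by this inversion step independently of the other, which is the motivation the abstract gives for the unified design.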
