Abstract

Haze significantly degrades the quality of captured photos and videos. Beyond reducing the reliability of monitoring equipment, this degradation can be dangerous. Problems caused by hazy conditions have increased in recent years, necessitating the development of real-time dehazing techniques. Intelligent vision systems, such as surveillance and monitoring systems, depend fundamentally on the quality of the input images, which has a significant impact on object-detection accuracy. This paper presents a fast video dehazing technique using a Generative Adversarial Network (GAN) model. The haze in the input video is estimated from the scene depth extracted using a pre-trained monocular-depth ResNet model. Based on the amount of haze, an appropriate model trained for that specific haze condition is selected. The novelty of the proposed work is that the generator model is kept simple to produce faster results in real time, while the discriminator is kept complex to make the generator more effective. The traditional loss function is replaced with a Visual Geometry Group (VGG) feature loss for better dehazing. The proposed model produced better results than existing models: the Peak Signal-to-Noise Ratio (PSNR) obtained for most frames is above 32, and the execution time is less than 60 milliseconds, which makes the proposed model well suited for video dehazing.
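The pipeline the abstract describes (estimate depth, derive a haze level, then dispatch to a generator trained for that haze condition) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the transmission formula t = exp(-beta * d) is the standard atmospheric scattering model, and the thresholds and model names are hypothetical placeholders.

```python
import math

def transmission(depth_map, beta=1.0):
    # Per-pixel transmission from estimated depth, using the standard
    # atmospheric scattering model t = exp(-beta * d). beta is assumed.
    return [[math.exp(-beta * d) for d in row] for row in depth_map]

def haze_level(depth_map, beta=1.0):
    # Scalar haze estimate in [0, 1]: mean of (1 - transmission).
    t = transmission(depth_map, beta)
    vals = [1.0 - x for row in t for x in row]
    return sum(vals) / len(vals)

def select_model(level, thresholds=(0.3, 0.6)):
    # Dispatch to a generator trained for a specific haze condition.
    # The three tiers and their thresholds are illustrative assumptions.
    if level < thresholds[0]:
        return "generator_light"
    if level < thresholds[1]:
        return "generator_medium"
    return "generator_dense"

# Toy depth map, standing in for the output of a monocular-depth network.
depth = [[0.2, 0.5], [1.0, 2.0]]
model = select_model(haze_level(depth))
```

In this sketch, a denser scene-wide haze (larger depths, lower transmission) routes the frame to a generator specialized for heavier haze, matching the abstract's model-selection step.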
