Abstract

Researchers increasingly rely on vision-based measurement tools to record, detect, and monitor haze, an atmospheric phenomenon that impedes the proper functioning of many outdoor industrial systems such as autonomous driving, surveillance, and satellite imagery. Conventional visibility restoration methods cannot accurately recover image quality because they rely on inaccurate estimates of haze thickness and suffer from color-cast effects. Deep neural networks have gained traction because they can dehaze images directly from hazy scenes. Therefore, this study proposes a unique attention-based end-to-end dehazing network, named Oval-Net, that restores clear images from their hazy counterparts without employing the atmospheric scattering model. Oval-Net is an encoder-decoder architecture that applies spatial and channel attention at each stage, focusing on dominant and significant information while preventing irrelevant information from being transmitted from the encoder to the decoder, which allows quicker convergence. The proposed approach outperforms seven state-of-the-art algorithms in quantitative and qualitative assessments on a variety of synthetic and real-world hazy images, demonstrating its effectiveness for vision-based industrial systems.
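The abstract does not specify how the spatial and channel attention modules are wired into the encoder-decoder, so the following is only a minimal sketch of one plausible reading: a CBAM-style gate that filters an encoder feature map before it crosses the skip connection to the decoder. All class names, parameter choices (e.g., the reduction ratio and kernel size), and the gating order are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of "spatial and channel attention at each stage":
# an attention gate on the encoder-to-decoder skip connection.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gate."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each channel by a learned, globally pooled importance score.
        return x * self.mlp(self.pool(x))


class SpatialAttention(nn.Module):
    """Single-channel spatial gate built from channel-wise avg and max maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        gate = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate


class AttentionGatedSkip(nn.Module):
    """Applies channel then spatial attention to an encoder feature so that
    only the more informative responses flow across the skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, encoder_feat):
        return self.sa(self.ca(encoder_feat))


if __name__ == "__main__":
    skip = AttentionGatedSkip(channels=64)
    feat = torch.randn(1, 64, 128, 128)   # dummy encoder feature map
    print(skip(feat).shape)               # torch.Size([1, 64, 128, 128])
```

In this reading, the gated feature replaces the raw encoder feature at each decoder stage, which is one way to realize the abstract's claim of suppressing irrelevant information on the encoder-to-decoder path.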
