Abstract
Image-based quantified visibility estimation is an important task for both atmospheric science and computer vision. Traditional methods rely largely on meteorological observations or manual camera calibration, which restricts their performance and generality. In this paper, we propose a new end-to-end pipeline for quantified visibility estimation from a single image, built on a careful integration of meteorological physical constraints with deep learning architecture design. Specifically, the proposed Deep Quantified Visibility Estimation Network (abbreviated as DQVENet) consists of three modules: the Transmission Estimation Module (TEM), the Depth Estimation Module (DEM), and the Extinction coEfficient Estimation Module (E3M). Building on these modules, meteorological prior constraints can be combined with deep learning. To validate the performance of DQVENet, this paper also constructs a traffic image dataset (named QVEData) with accurate visibility calibration. Experimental results on QVEData, compared with many state-of-the-art methods, demonstrate the effectiveness and superiority of DQVENet.
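The abstract does not give implementation details, but the meteorological physical constraint linking transmission, depth, and extinction is standard atmospheric optics: Beer-Lambert attenuation gives t(x) = exp(-β·d(x)), and Koschmieder's law converts the extinction coefficient β into visibility as V = ln(1/ε)/β ≈ 3.912/β for the usual 5% contrast threshold ε. The sketch below shows one way per-pixel TEM and DEM outputs could be fused into a scalar visibility under these laws; the function name, tensor shapes, and the median aggregation are illustrative assumptions, not the paper's E3M.

```python
# Minimal sketch (assumptions, not the authors' code): combining assumed
# TEM and DEM outputs into a visibility estimate via Koschmieder's law.
import math
import torch

CONTRAST_THRESHOLD = 0.05  # standard 5% threshold; ln(1/0.05) ~ 3.912


def visibility_from_modules(transmission: torch.Tensor,
                            depth: torch.Tensor,
                            eps: float = 1e-6) -> torch.Tensor:
    """transmission: assumed TEM output t(x) in (0, 1], shape (B, 1, H, W).
    depth: assumed DEM output d(x) in meters, same shape.
    Returns one visibility estimate (meters) per image."""
    # Beer-Lambert attenuation: t(x) = exp(-beta * d(x))  =>  beta = -ln t / d
    beta = -torch.log(transmission.clamp(min=eps)) / depth.clamp(min=eps)
    # Koschmieder's law: V = ln(1 / threshold) / beta  (approx. 3.912 / beta)
    vis = math.log(1.0 / CONTRAST_THRESHOLD) / beta.clamp(min=eps)
    # Collapse per-pixel estimates to a scalar; the median is a robustness
    # choice here, not necessarily what the paper's E3M does.
    return vis.flatten(1).median(dim=1).values


# Example with placeholder inputs for a batch of two images.
t = torch.rand(2, 1, 64, 64) * 0.9 + 0.05  # transmission in (0.05, 0.95)
d = torch.rand(2, 1, 64, 64) * 500 + 10    # depth between 10 and 510 m
print(visibility_from_modules(t, d))        # tensor of shape (2,)
```

The clamping guards against division by zero for saturated transmission or zero-depth pixels; an end-to-end network such as DQVENet would presumably learn this fusion rather than apply it as a fixed post-processing step.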