Abstract

Visibility is a measure of the transparency of the atmosphere and an important factor for road, air, and water transportation safety. Recently, features extracted from convolutional neural networks (CNNs) have achieved state-of-the-art results in estimating the visibility range from images captured in foggy weather. However, existing CNN-based approaches have adopted only visible images as observational data. Unlike these previous studies, in this paper, visible-infrared image pairs are used to estimate the visibility range. A novel multimodal deep fusion architecture based on a CNN is then proposed to learn robust joint features from the two sensor modalities. Our network architecture is composed of two integrated residual network processing streams and one CNN stream, which are connected in parallel. In addition, we construct a visible-infrared multimodal dataset covering various fog densities and label the visibility range. We then compare the proposed method with conventional deep-learning-based approaches and analyze the contributions of the different observational data and classical deep fusion models to the classification of the visibility range. The experimental results demonstrate that both accuracy and robustness are strongly enhanced by the proposed method, especially for small training datasets.
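
As a rough, hedged illustration of this kind of parallel fusion design (a minimal sketch under assumed layer sizes and module names, not the authors' exact architecture), such a network could be organized as follows in PyTorch: two residual streams, one per modality, plus a plain CNN stream on the stacked input, with the resulting features concatenated before a visibility-range classifier.

```python
# Hypothetical sketch of a parallel visible-infrared fusion network:
# two residual-network streams (one per modality) plus one plain CNN
# stream, whose features are concatenated and classified into
# visibility-range classes. Layer sizes and class count are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class VisibleInfraredFusionNet(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Residual stream for the visible (RGB) image.
        self.visible_stream = resnet18(weights=None)
        self.visible_stream.fc = nn.Identity()           # 512-d features
        # Residual stream for the infrared image (3-channel input assumed).
        self.infrared_stream = resnet18(weights=None)
        self.infrared_stream.fc = nn.Identity()          # 512-d features
        # Plain CNN stream on the stacked 6-channel visible+infrared input.
        self.joint_stream = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                                # 64-d features
        )
        # Feature-level fusion followed by a visibility-range classifier.
        self.classifier = nn.Linear(512 + 512 + 64, num_classes)

    def forward(self, visible: torch.Tensor, infrared: torch.Tensor):
        f_vis = self.visible_stream(visible)
        f_ir = self.infrared_stream(infrared)
        f_joint = self.joint_stream(torch.cat([visible, infrared], dim=1))
        return self.classifier(torch.cat([f_vis, f_ir, f_joint], dim=1))

# Example usage with dummy 224x224 image pairs.
model = VisibleInfraredFusionNet(num_classes=5)
vis = torch.randn(2, 3, 224, 224)
ir = torch.randn(2, 3, 224, 224)
logits = model(vis, ir)   # shape: (2, 5)
```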

Highlights

  • Visibility is defined as the furthest distance at which a black object of suitable dimensions situated near the ground can be recognized when observed against the background [1]

  • We conducted experiments to evaluate the effectiveness of the proposed multimodal visibility range estimation method by comparing it with four conventional deep-learning-based visibility models (Hazar et al. [14], Li et al. [13], a convolutional neural network (CNN)-recurrent neural network (RNN) model [23], and VisNet [25]), with two multimodal fusion models, namely a signal-level model (RGB-IR 4Ch & AlexNet [22]) and a feature-level model (Eitel et al. [31]), whose structures are summarized in Fig. 6, and with two CNN-based object classification models (ResNet [26] and VGG [32]); a sketch contrasting signal-level and feature-level fusion follows this list

  • This illustrates that, in foggy weather, deep feature maps learned from infrared images are much more effective than those learned from visible images
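
The signal-level and feature-level fusion baselines mentioned above differ mainly in where the two modalities are combined: before the network (stacked channels) or after separate per-modality streams (concatenated features). A minimal sketch of that distinction, assuming a single-channel infrared image and illustrative layer sizes rather than the exact baselines of [22] and [31]:

```python
import torch
import torch.nn as nn

# Signal-level fusion: stack the RGB and infrared channels into one
# 4-channel tensor and feed a single CNN (in the spirit of an
# "RGB-IR 4Ch" input to an AlexNet-style network).
class SignalLevelFusion(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, rgb, ir):
        return self.head(self.backbone(torch.cat([rgb, ir], dim=1)))

# Feature-level fusion: run each modality through its own CNN and
# concatenate the resulting feature vectors before classification.
class FeatureLevelFusion(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 64, kernel_size=7, stride=2, padding=3),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
            )
        self.rgb_stream, self.ir_stream = stream(3), stream(1)
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb, ir):
        return self.head(torch.cat([self.rgb_stream(rgb), self.ir_stream(ir)], dim=1))

# Quick check with dummy inputs.
rgb, ir = torch.randn(1, 3, 224, 224), torch.randn(1, 1, 224, 224)
print(SignalLevelFusion()(rgb, ir).shape)   # torch.Size([1, 5])
print(FeatureLevelFusion()(rgb, ir).shape)  # torch.Size([1, 5])
```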


Introduction

Visibility is defined as the furthest distance at which a black object of suitable dimensions situated near the ground can be recognized when observed against the background [1]. Changes in visibility depend on atmospheric transparency. Adverse weather phenomena can make the atmosphere turbid and reduce transparency. In the presence of fog, haze, or air pollution, the visibility distance can be reduced dramatically [2]. The behavior of drivers in fog is often inappropriate (e.g., reduced attention and altered reaction times), though the reasons for these dangerous actions are not fully understood [3], [4]. Haze is a meteorological condition that makes flying difficult, affecting both take-off and landing. This has a negative economic effect on airlines and airports due to delays and cancellations and affects public travel [5].
