Computer vision systems operating outdoors are strongly affected by varying atmospheric/weather conditions. Understanding the actual state of an outdoor scene is therefore necessary for effectively removing such degradations and improving the overall performance of these systems. Although the classification of atmospheric/weather conditions has been well explored, formulating it as a multiclass problem using Convolutional Neural Networks (CNNs) has received very little attention. To address this gap, we propose a new CNN architecture, the “Adversarial Weather Degraded Multi-class scenes Classification Network (AWDMC-Net)”, for classifying outdoor scenes degraded by different atmospheric/weather conditions. The proposed network adopts different combinations of skip connections in its building blocks and thereafter adaptively prunes the least important convolutional kernels from the network. For effective pruning, we propose a new pruning criterion, the “Entropy Guided Mean-l1 Norm”, which adaptively evaluates the importance of convolutional kernels by considering both the filters and their corresponding output feature maps. The prediction performance of the proposed model was evaluated on our newly designed E-TUVD (Extended Tripura University Video Dataset) and on publicly available benchmark datasets. E-TUVD consists of 147 video clips (approximately 793,800 frames) representing six atmospheric/weather conditions: fog, dust, rain, haze, poor illumination, and clear day. Our model achieves an accuracy of 93.85%, a specificity of 93.79%, and a sensitivity of 94.18% on this dataset, outperforming prevailing standard CNN models and recent state-of-the-art methods for atmospheric/weather classification. Furthermore, the network reduces the time consumed by atmospheric/weather classification and therefore largely meets the requirements of practical, real-world applications.
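The abstract does not give the exact formulation of the Entropy Guided Mean-l1 Norm criterion; the sketch below is only one plausible interpretation, assuming the score combines each filter's mean l1-norm of weights with the entropy of its output feature-map activations (all function names and the histogram-free entropy estimate are hypothetical).

```python
# Hypothetical sketch of an entropy-guided mean-l1 pruning score (PyTorch).
# Assumption: importance = (mean l1-norm of filter weights) * (entropy of the
# filter's output feature map); filters with the lowest scores are pruned.
import torch

def filter_importance(conv_weight: torch.Tensor, feature_maps: torch.Tensor) -> torch.Tensor:
    """conv_weight: (out_ch, in_ch, k, k); feature_maps: (batch, out_ch, H, W)."""
    # Mean l1-norm of each filter's weights.
    mean_l1 = conv_weight.abs().mean(dim=(1, 2, 3))                 # (out_ch,)

    # Entropy of each output channel, treating normalized activation
    # magnitudes as a probability distribution over spatial positions.
    flat = feature_maps.abs().permute(1, 0, 2, 3).reshape(conv_weight.size(0), -1)
    probs = flat / (flat.sum(dim=1, keepdim=True) + 1e-12)
    entropy = -(probs * (probs + 1e-12).log()).sum(dim=1)           # (out_ch,)

    # Low score = low-magnitude filter producing low-information maps.
    return mean_l1 * entropy
```

In such a scheme, a filter is kept only if its weights are non-negligible and its feature maps carry appreciable information, which matches the abstract's statement that both the filters and their output feature maps are considered.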