Abstract

Since the appearance of clouds is highly variable, ground-based cloud classification remains in urgent need of development for weather station networks. Many existing methods resort to convolutional neural networks to improve classification accuracy. However, these methods extract features from only one convolutional layer, making it difficult to capture the complete information of ground-based cloud images. To address this limitation, in this paper we propose a novel method named salient dual activations aggregation (SDA2), which extracts ground-based cloud features from different convolutional layers and thereby simultaneously learns structural, textural, and high-level semantic information for ground-based cloud representation. Specifically, a salient patch selection strategy is first applied to select salient vectors from one shallow convolutional layer. Then, corresponding weights are learned from one deep convolutional layer. After obtaining a set of salient vectors with their weights, SDA2 aggregates them into a single representative vector for each ground-based cloud image by explicitly modeling the relationships among the salient vectors. The proposed SDA2 is validated on three ground-based cloud databases, and the experimental results demonstrate its effectiveness. In particular, we obtain promising classification accuracies of 91.24% on the MOC_e database, 91.15% on the IAP_e database, and 88.73% on the CAMS_e database.
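The pipeline summarized above (select salient feature vectors from a shallow layer, weight them using a deep layer, then aggregate) can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: it assumes the deep-layer activations have been reduced to a single saliency map spatially aligned with the shallow feature map, uses the top-k highest-saliency positions as the "salient patches", and aggregates by a normalized weighted sum.

```python
import numpy as np

def sda2_sketch(shallow_feat, deep_saliency, k=8):
    """Hedged sketch of salient dual activations aggregation.

    shallow_feat:  (H, W, C) activations from a shallow conv layer
                   (one C-dim descriptor per spatial position).
    deep_saliency: (H, W) saliency map derived from a deep conv layer,
                   assumed already aligned to shallow_feat's grid.
    k:             number of salient spatial positions to keep.
    Returns an L2-normalized C-dim representation of the image.
    """
    H, W, C = shallow_feat.shape
    vectors = shallow_feat.reshape(H * W, C)       # candidate salient vectors
    scores = deep_saliency.reshape(H * W)

    # Salient patch selection: keep the k positions with highest saliency.
    idx = np.argsort(scores)[::-1][:k]
    weights = scores[idx]
    weights = weights / (weights.sum() + 1e-8)     # normalize the learned weights

    # Aggregate the weighted salient vectors into one representative vector.
    rep = (weights[:, None] * vectors[idx]).sum(axis=0)
    return rep / (np.linalg.norm(rep) + 1e-8)      # L2-normalize the descriptor
```

In practice the weighting and aggregation in SDA2 are learned, whereas this sketch uses a fixed weighted sum purely to show the data flow between the two layers.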
