Abstract

This study implements the Vision Transformer (ViT, "16x16 Words") model for weather image classification and compares its performance with traditional convolutional neural network (CNN) architectures, namely EfficientNetB2, DenseNet201, EfficientNetB7 and MobileNetV2. All models are built with transfer learning. To ensure a fair comparison, the same hyper-parameters, such as dropout rate, optimizer and learning rate, are used for every model, and all models are trained, validated and tested on the same weather image dataset with identical splits. The dataset consists of 11 classes covering various weather phenomena, collected from different sources. The test results show that the Vision Transformer achieves the best accuracy at 86.20%, making it well suited to the weather image classification problem.
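As a minimal sketch of the transfer-learning setup described above, the snippet below fine-tunes a pre-trained ViT-B/16 backbone for the 11 weather classes. The choice of PyTorch/torchvision and the specific dropout rate, optimizer and learning-rate values are assumptions for illustration only; the abstract states merely that these hyper-parameters were kept identical across all compared models.

```python
# Illustrative transfer-learning setup (assumed PyTorch/torchvision stack;
# hyper-parameter values are placeholders, not the study's actual settings).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 11  # 11 weather phenomenon classes, as stated in the abstract

# Load a ViT-B/16 backbone pre-trained on ImageNet (transfer learning).
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)

# Replace the classification head with dropout plus a new linear layer
# sized for the weather classes.
in_features = vit.heads.head.in_features
vit.heads.head = nn.Sequential(
    nn.Dropout(p=0.2),                      # dropout rate: illustrative value
    nn.Linear(in_features, NUM_CLASSES),
)

# The same optimizer and learning-rate settings would be reused when
# fine-tuning the CNN baselines (EfficientNetB2, DenseNet201,
# EfficientNetB7, MobileNetV2) for a like-for-like comparison.
optimizer = torch.optim.Adam(vit.parameters(), lr=1e-4)  # illustrative values
criterion = nn.CrossEntropyLoss()
```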
