Abstract

Categorizing flowers is quite a challenging task: the species are highly diverse, yet images of different flower species can look quite similar. Flower categorization also involves issues such as low-resolution and noisy images, and images in which the flowers are occluded by the leaves and stems of the plants, or sometimes even by insects. Traditionally, handcrafted features were extracted and classical machine learning algorithms were applied, but with the advent of deep neural networks, the focus of researchers has shifted towards non-handcrafted (learned) features for image categorization tasks because of their fast computation and efficiency. In this study, the images are pre-processed to enhance the key features and suppress undesired information, and the objects are localized through segmentation to extract the Region of Interest, detect the objects, and perform feature extraction, followed by supervised classification of flowers into five categories: daisy, sunflower, dandelion, tulip, and rose. The first step involves pre-processing of the images, the second step involves feature extraction using the pre-trained models ResNet50, MobileNet, DenseNet169, InceptionV3, and VGG16, and finally classification is performed into the five flower categories. The results obtained from these proposed architectures are then analyzed and presented in the form of confusion matrices. In this study, a CNN model is proposed to evaluate the performance of flower image categorization, and data augmentation is applied to the images to address the problem of overfitting. The pre-trained models ResNet50, MobileNet, DenseNet169, InceptionV3, and VGG16 are implemented on the flower dataset and empirically assessed on the categorization task.
Performance is analyzed in terms of training accuracy, validation accuracy, training loss, and validation loss. The empirical assessment demonstrates that these pre-trained models are quite effective for categorization tasks. According to the performance analysis, VGG16 outperforms all the other models, achieving a training accuracy of 99.01%. DenseNet169 and MobileNet also give comparable validation accuracy, while ResNet50 gives the lowest training accuracy, 60.46%, of all the pre-trained models compared.
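The data-augmentation step mentioned above can be sketched as follows. The abstract does not list the exact transforms used in the study, so simple flips and a 90-degree rotation (common augmentation choices for flower images) are assumed here purely for illustration:

```python
import numpy as np

def augment(image):
    """Return simple augmented variants of an image array of shape (H, W, C).

    A minimal sketch of the augmentation step; the actual transforms
    used in the study are not specified, so flips and a rotation are
    assumed here as representative examples.
    """
    return [
        np.fliplr(image),  # horizontal flip
        np.flipud(image),  # vertical flip
        np.rot90(image),   # 90-degree counter-clockwise rotation
    ]

# Example: each original 224x224 RGB image yields three extra variants,
# effectively quadrupling the training data and helping reduce overfitting.
img = np.zeros((224, 224, 3), dtype=np.uint8)
variants = augment(img)
print(len(variants))  # 3 augmented variants per original image
```

In practice, frameworks such as Keras or PyTorch apply equivalent transforms on the fly during training rather than materializing them up front, but the effect on the dataset is the same.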
