Abstract

The use of conventional classification techniques to recognize diseases and pests can lead to an incorrect judgment on whether crops are diseased or not. Hot pepper diseases such as anthracnose and bacterial spot, in particular, can be misjudged, leading to incorrect disease recognition. To address these issues, multi-recognition methods, such as Google Cloud Vision, suggest multiple disease candidates and allow the user to make the final decision. Similarity-based image search techniques, along with multi-recognition, can also be used for this purpose. Several conventional similarity-based image searches rely on content-based image retrieval techniques, using descriptors to extract features such as image color and edges. In this study, we instead use eight pre-trained deep learning models (VGG16, VGG19, ResNet-50, etc.) to extract deep features from images. We conducted experiments on 28,011 images covering 34 types of hot pepper diseases and pests. Images similar to a query were retrieved by applying the k-nearest neighbor (k-NN) method to the deep features. For the top-1 through top-5 results, deep features from the ResNet-50 model achieved recognition accuracies of approximately 88.38–93.88% for diseases and approximately 95.38–98.42% for pests. Deep features extracted from the VGG16 and VGG19 models recorded the second- and third-highest performances, respectively. For the top-10 results, deep features from the ResNet-50 model achieved accuracies of 85.6% for diseases and 93.62% for pests. Compared with a simple convolutional neural network (CNN) classification model, the proposed method recorded 8.62% higher accuracy for diseases and 14.86% higher accuracy for pests.
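The retrieval step described above (deep features plus k-nearest neighbors) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the label-matching rule for top-k accuracy are assumptions, and the deep features are assumed to have been extracted beforehand (e.g., the 2048-dimensional ResNet-50 pooling output for each image).

```python
import numpy as np

def knn_search(query, gallery, k=5):
    """Return indices of the k gallery vectors nearest to the query,
    using Euclidean distance on L2-normalized deep features."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    dists = np.linalg.norm(g - q, axis=1)
    return np.argsort(dists)[:k]

def top_k_accuracy(queries, query_labels, gallery, gallery_labels, k=5):
    """Count a query as correct if any of its k nearest neighbors
    shares the query's disease/pest label (an assumed evaluation rule)."""
    hits = 0
    for q, label in zip(queries, query_labels):
        idx = knn_search(q, gallery, k)
        if label in gallery_labels[idx]:
            hits += 1
    return hits / len(queries)

# Toy 2-D "deep features" for two classes, in place of real
# pre-trained-model outputs.
gallery = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
gallery_labels = np.array([0, 0, 1, 1])
```

With these toy vectors, a query close to class 0 (e.g., `[1.0, 0.05]`) retrieves a class-0 gallery image as its nearest neighbor, so its top-1 accuracy is 1.0.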

Highlights

  • We applied the deep features extracted from eight pre-trained models to image search of hot pepper diseases and pests and measured the recognition accuracy

  • The experimental results indicate that the ResNet-50, VGG16, and VGG19 models recorded the highest search accuracy for the disease and pest images used in the study

  • Because the deep features were extracted from the pre-trained models trained on big data, such as ImageNet, there was no need for a model training process


Summary

Motivation

The hot pepper (Capsicum annuum) is an essential vegetable worldwide. FAO (2018) statistics (item: “Chillies and peppers, green”) indicate that its global production has steadily increased. Incorrect diagnosis and treatment of diseases and pests can result in crop damage. To address these issues, multi-recognition methods, such as Google Cloud Vision, suggest multiple candidates and allow users to make the final decision. Owing to the limited ability of hand-crafted descriptors to extract features, the search accuracy for diseases and pests is relatively low compared with that of conventional deep learning-based recognition models. Conventional deep learning models require a large amount of data and considerable time and effort to train [3], and issues such as overfitting can arise [4,5]. To address the lack of disease and pest image data, we extracted the deep features of disease and pest images using pre-trained models.

Single Recognition
Transfer Learning and Pre-Trained Models
Multi-Recognition and Similarity-Based
Summary
Materials and Methods
Deep Feature Extraction Using Pre-Trained Models
Similar Image Search Using KNN
Algorithm
Experimental Process
Performance Evaluation of Pre-Trained Models
Results and Discussion
Proposed Method
Conclusions