Abstract

Beans are essential crops globally, standing out as one of the most consumed and nourishing legumes, and thereby playing a significant role in human nutrition and food security. Their cultivation faces several challenges, such as pests, diseases, unpredictable weather patterns, and soil erosion. Of these, diseases are recognized as a key challenge, reducing both yield quality and quantity and inflicting substantial financial losses on farmers. This work proposes a deep learning-based approach for precise in-field identification of diseases in bean plants. We evaluate image classification and object detection models built on state-of-the-art Convolutional Neural Network (CNN) architectures to identify Angular Leaf Spot (ALS) and bean rust, key bean diseases in Uganda and the region in general, from smartphone images of bean leaves collected in various districts of Uganda. The dataset used to train these models is the Makerere University beans image dataset, comprising 15,335 images categorized into three (3) classes (ALS, bean rust, and healthy). To improve in-field performance, the dataset was expanded with an additional class (unknown) consisting of a diverse collection of 2,800 images to account for images unrelated to the three (3) predefined classes. Adversarial training was further employed to enhance model robustness in identifying the target classes. In addition, two (2) Out-of-Distribution (OOD) detection techniques, i.e., confidence thresholding and training with an auxiliary class (the unknown class), were used to handle inputs unrelated to bean leaves. Our results show that our custom CNN, BeanWatchNet, achieved 90% accuracy when tested on unseen data for classification of the three (3) target classes, i.e., ALS, bean rust, and healthy.
EfficientNet v2 B0 and BeanWatchNet demonstrated superior performance on the four-class (with the unknown class) image classification task, achieving 91% and 90% accuracy, respectively, on the test dataset. Among the object detection models, YOLO v8 performed best, attaining an mAP@50 of 87.6. The custom CNN model and the YOLO v8 model were quantized and deployed on two (2) edge platforms, a smartphone (through a mobile application) and a Raspberry Pi 4B, to facilitate in-field disease detection. The benchmarking code and best models are publicly available on GitHub.
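One of the two OOD-detection techniques mentioned above, confidence thresholding, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation: the class names follow the dataset's three target classes, while the threshold value (0.7) and function name are illustrative assumptions.

```python
import numpy as np

# Illustrative class list from the Makerere beans dataset; the threshold
# value is an assumption, not the one used in the paper.
CLASSES = ["angular_leaf_spot", "bean_rust", "healthy"]
THRESHOLD = 0.7

def classify_with_ood(probs, threshold=THRESHOLD):
    """Return the predicted class label, or 'unknown' when the model's
    top softmax probability falls below the confidence threshold."""
    probs = np.asarray(probs, dtype=float)
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return "unknown"  # low confidence: treat input as out-of-distribution
    return CLASSES[top]

# A confident in-distribution prediction vs. an uncertain input.
print(classify_with_ood([0.85, 0.10, 0.05]))  # → angular_leaf_spot
print(classify_with_ood([0.40, 0.35, 0.25]))  # → unknown
```

The second technique, training with an auxiliary unknown class, instead makes the rejection decision part of the classifier itself by adding a fourth output neuron trained on the 2,800 unrelated images.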
