Highlights
- Image classification was used to identify tar spot, NLB, GLS, and NLS corn diseases with 99.51% testing accuracy.
- YOLOv7 object detection was used to locate and identify tar spot lesions on infected leaves with an mAP@0.5 of 40.46%.
- A novel tar spot severity estimation approach was developed using deep learning and color histogram thresholding.
- A web-based disease diagnosis tool was developed to support accurate, in-field diagnosis of tar spot in corn.

Abstract. Management of tar spot disease in corn has relied largely on manual field scouting and visual assessment since the disease was first observed in the US in 2015. As the application of computer vision and deep learning (DL) techniques to disease management grows, DL models have been used to identify the disease, and its severity has been estimated from close-range images of infected corn leaves acquired under lab conditions with uniform backgrounds. However, DL models trained on images acquired under uniform lab conditions generalize poorly to field conditions. Although recent studies have successfully quantified the disease, no field-ready application has yet been developed to analyze it under field conditions with complex backgrounds. Therefore, this study acquired a custom handheld imagery dataset of 455 tar spot images in a greenhouse with noisy backgrounds to simulate field conditions for training DL-based disease identification models. The dataset was combined with the publicly available Corn Disease and Severity (CD&S) dataset, which consists of field-acquired images of Northern Leaf Blight (NLB), Gray Leaf Spot (GLS), and Northern Leaf Spot (NLS). Image classification models were first trained to accurately identify tar spot, NLB, GLS, and NLS. YOLOv7 object detection models were then trained to locate and identify tar spot lesions. In addition, semantic segmentation models based on the UNet architecture were trained for leaf and lesion segmentation. For image classification, the highest testing accuracy of 99.51% was achieved with the InceptionV3 model. For YOLOv7 object detection, the highest mAP@0.5 for locating and identifying tar spot lesions on infected leaves was 40.46%. For UNet semantic segmentation, the mIoU values for leaf and lesion segmentation were 0.80 and 0.28, respectively. Given the low lesion segmentation performance, traditional color histogram thresholding was therefore used to segment the tar spot lesions, and it was combined with the DL techniques to develop a novel severity estimation framework. After the models were evaluated, the image classification model was deployed on a progressive web application accessible through a smartphone to enable real-time analysis. In this study, different DL models were trained and evaluated for tar spot disease identification and severity estimation, and a smartphone-based disease diagnosis tool was developed that has the potential to provide accurate, in-field diagnosis of tar spot in corn.

Keywords: Computer vision, Deep learning, Disease identification, Image classification, Precision agriculture, Severity estimation, Tar spot.
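The sketch below illustrates the general idea behind the severity estimation framework described in the abstract: severity is expressed as the fraction of leaf area covered by lesions, with the leaf mask assumed to come from a segmentation model (as with the paper's UNet) and the dark tar spot lesions isolated by a simple color threshold. It is a minimal illustration, not the authors' implementation; the function name `estimate_severity`, the `leaf_mask` input, and the fixed `value_thresh` cutoff (standing in for a histogram-derived threshold) are assumptions made here.

```python
# Minimal sketch (not the authors' code): tar spot severity as the ratio of
# lesion pixels to leaf pixels. The leaf mask is assumed to be produced
# elsewhere (e.g., by a segmentation model); dark lesion pixels are isolated
# with a brightness threshold in HSV space as a stand-in for the color
# histogram thresholding step described in the abstract.
import cv2
import numpy as np


def estimate_severity(image_bgr: np.ndarray, leaf_mask: np.ndarray,
                      value_thresh: int = 60) -> float:
    """Return the percent of leaf area covered by dark (tar spot-like) lesions.

    image_bgr    : BGR leaf image (e.g., loaded with cv2.imread).
    leaf_mask    : uint8 binary mask of leaf pixels (nonzero = leaf), assumed
                   to be provided by a separate segmentation step.
    value_thresh : illustrative brightness cutoff; in practice this would be
                   chosen from the image's color histogram.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    value = hsv[:, :, 2]

    # Lesion candidates: dark pixels that fall inside the leaf mask.
    lesion_mask = ((value < value_thresh) & (leaf_mask > 0)).astype(np.uint8)

    leaf_pixels = int(np.count_nonzero(leaf_mask))
    lesion_pixels = int(np.count_nonzero(lesion_mask))
    if leaf_pixels == 0:
        return 0.0
    return 100.0 * lesion_pixels / leaf_pixels
```

In this formulation, the quality of the severity estimate depends mainly on how well the leaf mask excludes the background and on how the lesion threshold is chosen, which is consistent with the abstract's pairing of DL-based segmentation with color-based lesion thresholding.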