Abstract

Identification of plant disease is affected by many factors: the scarcity of samples showing rare or mild symptoms, the sensitivity of segmentation to light and shadow in the image-capturing conditions, and symptom characteristics in which multiple lesions of varied colours appear on the same leaf at different stages of infection. Traditional approaches face several problems: contrast handling leaves mild symptoms undetected, and edge handling causes curved surfaces and veins to be treated as new regions of interest. Threshold-based segmentation is restricted to a specific range of values, which prevents it from covering an entire area (healthy, injured, or noise). Deep learning approaches also struggle with imbalanced datasets. Samples with overlapping symptoms on the same leaf are rare, and most deep models detect only a single type of lesion at a time; training these models on masks containing a single type of infection leads to misclassification. Manual annotation of symptoms is also time-consuming. The framework proposed in this study therefore attempts to overcome these drawbacks of traditional segmentation approaches in order to generate masks for deep disease-classification models. The main objective is to label datasets through semi-automated segmentation of leaves and disordered regions. No contrast adjustment or filtering is needed, which keeps lesion characteristics unchanged; as a result, every pixel of the predetermined lesions is selected accurately. The approach is applied to three different datasets with single and multiple infections. The overall precision obtained is 90%, and the average intersection over union of the injured regions is 0.83. Brown and dark-brown lesions are segmented more accurately than yellow lesions.
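For reference, the intersection-over-union score reported above can be computed per lesion mask as in the following minimal sketch. It is illustrative only and assumes NumPy boolean masks; the function name and evaluation details are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): IoU between a predicted
# lesion mask and a manually annotated ground-truth mask.
import numpy as np

def lesion_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over union of two boolean lesion masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Both masks empty: define IoU as 1.0 by convention.
        return 1.0
    intersection = np.logical_and(pred, gt).sum()
    return float(intersection) / float(union)

# An average IoU of 0.83 would correspond to averaging lesion_iou
# over all injured regions in the evaluated datasets.
```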
