Abstract

Research on deep-learning-based recognition and segmentation of vegetable diseases in simple environments has achieved considerable success. In complex environments, however, the image background often contains elements that visually resemble leaves and disease spots, making it difficult for the recognition model to segment them. Consequently, segmentation precision is significantly reduced, which in turn affects the accuracy of disease severity classification. To address this problem, after analyzing the advantages and disadvantages of DeepLabV3+ and U-Net, this study proposes a two-stage model that fuses DeepLabV3+ and U-Net for cucumber leaf disease severity classification (DUNet) in complex backgrounds. In the first stage, the model uses DeepLabV3+ to segment leaves from the complex background. The segmented leaf images are then used as the input to the second stage, in which U-Net segments the diseased leaves to obtain the disease spots. Finally, the ratio of the disease-spot pixel area to the leaf pixel area is calculated to classify the disease severity. The experimental results show that the proposed model segments leaves and disease spots from complex backgrounds in a step-by-step manner and thereby completes disease severity classification. The leaf segmentation accuracy reached 93.27%, the Dice coefficient of disease spot segmentation reached 0.6914, and the average disease severity classification accuracy reached 92.85%. Compared with other models, the proposed model offers higher robustness, segmentation precision, and classification accuracy, providing useful ideas and methods for classifying the severity of cucumber leaf diseases in complex backgrounds.
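
To make the final grading step concrete, the sketch below shows how the spot-to-leaf area ratio described in the abstract could be computed from the two binary masks produced by the two stages (leaf mask from DeepLabV3+, spot mask from U-Net). The severity thresholds, function name, and synthetic masks are illustrative assumptions; the paper's abstract does not report its grading boundaries.

```python
import numpy as np

# Illustrative severity bins on the spot-to-leaf area ratio (assumed, not from the paper).
SEVERITY_BINS = [(0.00, "healthy"), (0.05, "mild"), (0.15, "moderate"), (0.30, "severe")]


def severity_from_masks(leaf_mask: np.ndarray, spot_mask: np.ndarray):
    """Classify disease severity from two binary masks.

    leaf_mask: boolean array, True where stage 1 (DeepLabV3+) labels leaf pixels.
    spot_mask: boolean array, True where stage 2 (U-Net) labels disease-spot pixels.
    Returns (area ratio, severity label).
    """
    leaf_area = leaf_mask.sum()
    if leaf_area == 0:
        return 0.0, "no leaf detected"
    # Count only spot pixels that fall inside the segmented leaf region.
    spot_area = np.logical_and(spot_mask, leaf_mask).sum()
    ratio = spot_area / leaf_area
    label = SEVERITY_BINS[0][1]
    for threshold, name in SEVERITY_BINS:
        if ratio >= threshold:
            label = name
    return float(ratio), label


# Example with synthetic masks: a 100x100 leaf containing a 40x40 spot (ratio = 0.16).
leaf = np.zeros((128, 128), dtype=bool)
leaf[10:110, 10:110] = True
spots = np.zeros_like(leaf)
spots[30:70, 30:70] = True
print(severity_from_masks(leaf, spots))  # -> (0.16, 'moderate') under the illustrative bins
```

In practice the two masks would come from the trained DeepLabV3+ and U-Net models applied in sequence, with the leaf mask also used to crop or zero out the background before the second stage.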
