Abstract

Automatic recognition of vegetable diseases against complex backgrounds is an urgent need in agricultural informatization. Deep-learning-based recognition methods have achieved excellent performance in disease diagnosis and have therefore become a research hotspot. However, disease recognition models built on deep convolutional neural networks usually must be trained on huge disease image datasets to achieve ideal results, and building such a dataset requires a large number of disease images and labels, which is often technically or economically infeasible. In this paper, we propose ITC-Net, a small-sample model for recognizing vegetable diseases in complex backgrounds based on image-text collaborative representation learning. The model combines disease-image modal information with disease-text modal information, exploiting the correlation and complementarity between the two modalities to recognize disease features collaboratively. On a small dataset, ITC-Net achieved better results than either the image model or the text model alone, with accuracy, precision, sensitivity and specificity of 99.48%, 98.90%, 98.78% and 99.66%, respectively. These results demonstrate that multi-modal collaborative representation learning over disease images and disease texts is an effective approach to few-shot vegetable disease recognition in complex backgrounds.
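The abstract does not specify ITC-Net's fusion mechanism; a common realization of image-text collaborative representation is to pool each modality into a feature vector, concatenate the two, and classify the fused vector. The sketch below illustrates that generic pattern with toy NumPy encoders; all names, dimensions, and the mean-pooling encoders are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64-d image features, 32-d text features, 4 classes.
IMG_DIM, TXT_DIM, N_CLASSES = 64, 32, 4

def encode_image(image):
    # Toy stand-in for a CNN branch: pool an H x W x IMG_DIM feature map.
    return image.mean(axis=(0, 1))

def encode_text(token_vectors):
    # Toy stand-in for a text branch: pool a T x TXT_DIM embedding matrix.
    return token_vectors.mean(axis=0)

# Fusion head: concatenate the two modal feature vectors, then classify.
W = rng.normal(size=(IMG_DIM + TXT_DIM, N_CLASSES)) * 0.1
b = np.zeros(N_CLASSES)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(image, token_vectors):
    fused = np.concatenate([encode_image(image), encode_text(token_vectors)])
    return softmax(fused @ W + b)

# One forward pass on random stand-in inputs.
probs = predict(rng.normal(size=(8, 8, IMG_DIM)),
                rng.normal(size=(5, TXT_DIM)))
```

In a trained model the encoders and the fusion weights `W`, `b` would be learned jointly, which is what lets the two modalities complement each other during recognition.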
