In recent years, deep learning methods have been proposed to automate the quality control of various agricultural products. Despite the excellent results obtained, one of their main drawbacks is the need for a large annotated dataset to reach satisfactory performance, and building such a dataset is time-consuming and tedious. Moreover, in some real-world applications, vision systems can be modified over time, and such changes may cause a drop in the performance of a network trained on the initial database (source images). To avoid creating a new labeled database every time the data distribution changes (target images), several domain adaptation methods have been proposed. In this article, we introduce an unsupervised deep domain adaptation method based on adversarial training. A large dataset is used, comprising six classes of potatoes: healthy, damaged, greening, black dot, common scab, and black scurf. Two domain adaptation scenarios are considered. First, a simple modification of the image acquisition system is simulated by artificially increasing the brightness of some images of white potatoes (target images). Second, a significantly different dataset of red potatoes is introduced; in this setting, white potatoes serve as source images and red tubers as target images. Because target annotations are unavailable, we propose to train the target classifier with a pseudo-label loss. Experimental results show that domain adaptation is essential: the average F1-score rises from 0.46 without adaptation to 0.84 with our method. Finally, a comparative analysis shows that adversarial-based unsupervised domain adaptation methods outperform discrepancy-based approaches.
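The pseudo-label loss mentioned above can be illustrated with a minimal sketch: run the current classifier on unlabeled target images, keep only predictions whose confidence exceeds a threshold, and treat those hard labels as ground truth in a cross-entropy term. This is a generic pseudo-labeling scheme, not the paper's exact implementation; the confidence threshold of 0.9 and the function names are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def select_pseudo_labels(target_logits, threshold=0.9):
    """Keep target samples whose top class probability exceeds `threshold`.

    Returns the indices of the retained samples and their hard
    pseudo-labels (class 0..5 for the six potato classes).
    The 0.9 threshold is an illustrative assumption.
    """
    probs = softmax(target_logits)
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return np.where(keep)[0], probs[keep].argmax(axis=1)

def pseudo_label_loss(logits, pseudo_labels):
    """Cross-entropy between classifier outputs and the pseudo-labels."""
    probs = softmax(logits)
    picked = probs[np.arange(len(pseudo_labels)), pseudo_labels]
    return -np.mean(np.log(picked + 1e-12))

# Toy usage: logits for two target images over six classes.
# The first is confidently class 0; the second is near-uniform and discarded.
logits = np.array([[5.0, 0, 0, 0, 0, 0],
                   [0.1, 0.2, 0, 0, 0, 0]])
idx, labels = select_pseudo_labels(logits)
loss = pseudo_label_loss(logits[idx], labels)
```

In the full adversarial method, this loss would be combined with a domain-discriminator objective so that target features both align with the source domain and remain consistent with their confident pseudo-labels.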