Abstract

Modern automated quality control systems for speciality crops combine computer vision with machine learning, exploiting large datasets to learn efficient crop assessment components. To model anomalous visuals, data augmentation methods are often developed as a simple yet powerful tool for manipulating readily available normal samples. State-of-the-art augmentation methods embed arbitrary "structural" peculiarities in normal images to build a classifier of these artefacts (i.e., the pretext task), enabling self-supervised representation learning of visual signals for anomaly detection (i.e., the downstream task). In this paper, however, we argue that learning such structure-sensitive representations may be suboptimal for agricultural anomalies (e.g., unhealthy crops) that could be better recognised by a different type of visual element, such as "colour". Specifically, we propose Channel Randomisation (CH-Rand), a novel data augmentation method that forces deep neural networks to learn effective encodings of "colour irregularities" under self-supervision whilst performing a pretext task to discriminate channel-randomised images. Extensive experiments are performed across various types of speciality crops (apples, strawberries, oranges, and bananas) to validate the informativeness of the learnt representations in detecting anomalous instances. Our results demonstrate that CH-Rand's representations are significantly more reliable and robust, outperforming state-of-the-art methods (e.g., CutPaste) that learn structural representations by over 43% in Area Under the Precision-Recall Curve (AUC-PR), particularly for strawberries. Additional experiments suggest that adopting the L*a*b* colour space and "curriculum" learning in the pretext task, which gradually disregards channel combinations that yield unrealistic outcomes, further improves downstream-task performance by 16% in AUC-PR. Notably, our experiments employ Riseholme-2021, a novel speciality crop dataset of 3.5K real strawberry images gathered in situ on a real farm, along with the public Fresh & Stale dataset. All our code and datasets are made publicly available online to ensure reproducibility and encourage further research in agricultural technologies.
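For illustration, the channel-randomisation augmentation described above can be sketched roughly as follows. This is a minimal Python/NumPy sketch based only on the abstract: the sampling-with-replacement scheme and the helper names channel_randomise and make_pretext_pair are assumptions for illustration, not the paper's exact implementation.

import itertools
import random

import numpy as np

def channel_randomise(image: np.ndarray) -> np.ndarray:
    """Return a channel-randomised copy of an H x W x 3 image.

    Each output channel is drawn from the input channels with
    replacement (e.g. RGB -> RRG), excluding the identity mapping,
    so every output carries some colour irregularity.
    """
    identity = (0, 1, 2)
    # All orderings of the three channels with replacement, minus the identity.
    combos = [c for c in itertools.product(range(3), repeat=3) if c != identity]
    chosen = random.choice(combos)
    return image[..., list(chosen)]

def make_pretext_pair(image: np.ndarray):
    """Label original images 0 and channel-randomised copies 1 for the pretext task."""
    return (image, 0), (channel_randomise(image), 1)

In the pretext task, a network would then be trained to discriminate originals from such channel-randomised copies, and the learnt representations reused for downstream anomaly detection.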
