Abstract

Crop root segmentation models developed through deep learning have increased the throughput of in situ crop phenotyping studies. However, models trained to identify roots in one image dataset may not accurately identify roots in another dataset, especially when the new dataset contains known differences, called domain shifts. The objective of this study was to quantify how model performance changes when models are used to segment image datasets that contain domain shifts and to evaluate approaches for reducing the error associated with those shifts. We collected maize root images at two growth stages (V7 and R2) in a field experiment and manually segmented the images to measure total root length (TRL). We developed five segmentation models and evaluated each model's ability to handle a temporal (growth-stage) domain shift. For the V7 growth stage, a growth-stage-specific model trained only on images captured at V7 was best suited for measuring TRL. At the R2 growth stage, combining images from both growth stages into a single training dataset produced the most accurate TRL measurements. We then applied two of the field models to images from a greenhouse experiment to evaluate how model performance changed when exposed to a cross-site domain shift. The field models were less accurate than models trained only on the greenhouse images, even when crop growth stage was identical. Although a model may perform well for one experiment, its error increases when it is applied to images from other experiments, even when crop species, growth stage, and soil type are similar.
