ABSTRACT Applications of convolutional neural network (CNN)-based object detectors in agriculture have been a popular research topic in recent years. However, complicated agricultural environments make ground truth annotation difficult and introduce potential uncertainties in image data quality. Using YOLOv4 as a representative state-of-the-art object detector, this study quantified YOLOv4's sensitivity to artificial image distortions, including white noise, motion blur, hue shift, saturation change, and intensity change, and examined, based on model classification accuracy, the importance of several training dataset attributes: dataset size, label quality, negative sample presence, image sequence, and image distortion level. The YOLOv4 model trained and validated on the original datasets fell to a 30% mean average precision (mAP) across the four apple flower bud growth stages at 31.91% white noise, 22.05-pixel motion blur, 77.38° clockwise hue shift, 64.81° counterclockwise hue shift, 89.98% saturation decrease, 895.35% saturation increase, 79.80% intensity decrease, and 162.71% intensity increase. YOLOv4 performance declined as both training dataset size and training image label quality decreased. Negative samples and training image sequence made no substantial difference in model performance. Incorporating distorted images during training improved the classification accuracy of YOLOv4 models on noisy test datasets by 13 to 390%. In the context of apple flower bud growth-stage classification, YOLOv4 is sufficiently robust to the image distortions likely to occur in practice from white noise, hue shift, saturation change, and intensity change, with motion blur as the exception. Training image label quality and training instance number are more important factors than training dataset size alone. Exposing models during training to images that resemble the test images is crucial for optimal classification accuracy. This study enhances the understanding of how to implement object detectors in agricultural research.
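The abstract does not include the distortion procedures themselves; the sketch below is a hypothetical illustration, assuming OpenCV/NumPy conventions (BGR color order, hue stored as 0-179), of how the five distortion types named above might be generated for a robustness sweep. All function names and parameterizations are assumptions for illustration, not the paper's published code.

```python
import cv2
import numpy as np

def add_white_noise(img, level):
    """Blend Gaussian white noise into the image; `level` in [0, 1]."""
    noise = np.random.normal(127.5, 50.0, img.shape).astype(np.float32)
    out = (1.0 - level) * img.astype(np.float32) + level * noise
    return np.clip(out, 0, 255).astype(np.uint8)

def add_motion_blur(img, kernel_size):
    """Horizontal motion blur using a normalized 1-D kernel of `kernel_size` pixels."""
    kernel = np.zeros((kernel_size, kernel_size), dtype=np.float32)
    kernel[kernel_size // 2, :] = 1.0 / kernel_size
    return cv2.filter2D(img, -1, kernel)

def shift_hue(img, degrees):
    """Rotate hue by `degrees` (positive = clockwise); OpenCV hue spans 0-179."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16)
    hsv[..., 0] = (hsv[..., 0] + int(round(degrees / 2))) % 180
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def scale_saturation(img, factor):
    """Multiply the saturation channel by `factor` (0.5 halves, 2.0 doubles)."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * factor, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def scale_intensity(img, factor):
    """Multiply the value (brightness) channel by `factor`."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] = np.clip(hsv[..., 2] * factor, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

In a sensitivity sweep of this kind, each distortion would be applied at increasing levels to a fixed test set, and the mAP recorded at each level until it crosses the chosen failure threshold (30% in this study).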