Abstract

Weeds challenge crops by competing for resources and spreading diseases, reducing crop yield and quality. Effective weed detection can improve herbicide application, thus reducing environmental and health risks. A major challenge in Site-Specific Weed Management (SSWM) is developing a reliable weed identification system, especially given the diversity of weeds and the similarity between certain weeds and crops during early growth stages. Image-based deep learning (DL) methods have become vital for weed classification. However, accurate weed classification and detection with DL techniques face the bottleneck of requiring large labeled datasets. Moreover, labeling such extensive data is a time-consuming and tedious task that also requires weed science experts. The central focus of this research is a novel approach to weed detection using convolutional neural network (CNN) classifiers, specifically YOLOv8l and RetinaNet, augmented with Stable Diffusion data, i.e., artificial weed images. Stable Diffusion enriched the training data, increasing the classifiers' adaptability. The study targeted specific weeds (Solanum nigrum L., Portulaca oleracea L., and Setaria verticillata L.) found in tomato crops, using a limited number of real images (30 samples) to produce artificial training images for the CNNs. All validation and test sets consist of real weed images. In terms of Mean Average Precision (mAP), training on artificial images alone yielded high performance: in isolated conditions, i.e., only one weed species per image, a mAP of 0.91 was achieved, an average performance gain of about 3% across all tests. When the artificial images were added to the real ones (mixed dataset), a mAP of 0.99 was obtained. In contrast, training on artificial images alone yielded a mAP of 0.81 when detecting more than a single weed species. However, training the CNNs on the mixed dataset achieved a 6%–9% performance gain in all cases.
A mAP of up to 0.93 was achieved under the most challenging conditions, where weed species could overlap. The results indicate that the proposed approach outperformed existing methods, such as Generative Adversarial Networks (GANs), in terms of mAP. Furthermore, the YOLOv8l model emerged as the most favorable option for real-time detection systems with respect to Frame Detection Speed (FDS): it registered an FDS of 10.2 ms, considerably faster than the 21.2 ms exhibited by the RetinaNet model. Additionally, the method is versatile and applicable to various crops and weed species, thereby enhancing automated weed management systems. This research illustrates that Stable Diffusion can efficiently expand small image sets, significantly reducing the need for field imaging. The study offers valuable insights for future SSWM efforts utilizing artificially generated images for weed detection and classification.
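The mAP figures above summarize per-class average precision computed from the detectors' predicted boxes. As a minimal illustration (with hypothetical boxes and scores, not data from the study), the following sketch computes intersection-over-union (IoU) and a simple, non-interpolated average precision for one class at a fixed IoU threshold of 0.5; mAP is then the mean of this quantity over classes.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truth, iou_thr=0.5):
    """AP for one class. detections: list of (score, box); each ground-truth
    box can be matched by at most one detection."""
    detections = sorted(detections, key=lambda d: -d[0])  # high score first
    matched = set()
    tp = fp = 0
    precisions, recalls = [], []
    for score, box in detections:
        # Greedily match against the best unmatched ground-truth box.
        best, best_i = 0.0, None
        for i, gt in enumerate(ground_truth):
            if i in matched:
                continue
            o = iou(box, gt)
            if o > best:
                best, best_i = o, i
        if best >= iou_thr and best_i is not None:
            matched.add(best_i)
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / len(ground_truth))
    # Non-interpolated AP: area under the precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

Note this is a didactic sketch only; standard benchmarks (e.g. the COCO protocol used by YOLOv8 and RetinaNet tooling) interpolate the precision-recall curve and average over several IoU thresholds.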
