Abstract
Deep learning models offer valuable insights by leveraging large datasets, enabling precise and strategic decision-making essential for modern agriculture. Despite their potential, limited research has focused on the performance of pixel-based deep learning algorithms for detecting and mapping weed canopy cover. This study aims to evaluate the effectiveness of three neural network architectures—U-Net, DeepLabV3 (DLV3), and pyramid scene parsing network (PSPNet)—in mapping weed canopy cover in winter wheat. Drone data collected at the jointing and booting growth stages of winter wheat were used for the analysis. A supervised deep learning pixel classification methodology was adopted, and the models were tested on broadleaved weed species, winter wheat, and other weed species. The results show that PSPNet outperformed both U-Net and DLV3 in classification performance, with PSPNet achieving the highest overall mapping accuracy of 80%, followed by U-Net at 75% and DLV3 at 56.5%. These findings highlight the potential of pixel-based deep learning algorithms to enhance weed canopy mapping, enabling farmers to make more informed, site-specific weed management decisions, ultimately improving production and promoting sustainable agricultural practices.
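The overall mapping accuracy reported above is the fraction of pixels whose predicted class matches the reference class. A minimal sketch of this pixel-wise metric (not the authors' code; the three-class labeling is an assumption based on the classes named in the abstract) is:

```python
# Hypothetical class labels: 0 = winter wheat, 1 = broadleaved weeds, 2 = other weeds
def overall_accuracy(pred, ref):
    """Fraction of pixels where the predicted label equals the reference label.

    pred, ref: 2D grids (lists of lists) of integer class labels, same shape.
    """
    total = 0
    correct = 0
    for pred_row, ref_row in zip(pred, ref):
        for p, r in zip(pred_row, ref_row):
            total += 1
            if p == r:
                correct += 1
    return correct / total

# Toy 2x4 label maps for illustration only (not real study data)
reference  = [[0, 0, 1, 2],
              [0, 1, 1, 2]]
prediction = [[0, 0, 1, 1],
              [0, 1, 2, 2]]
print(overall_accuracy(prediction, reference))  # 6 of 8 pixels agree -> 0.75
```

In practice this metric is computed over every labeled pixel in the test imagery; per-class metrics (e.g. intersection-over-union) are often reported alongside it because overall accuracy can be dominated by the majority class (here, the wheat canopy).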