Abstract

Iterations of the You Only Look Once (YOLO) deep learning model, spanning YOLOv7 to YOLOv8, were rigorously evaluated for their ability to detect oil palm trees. Precision, recall, F1-score, and detection time are analyzed for a range of configurations, including YOLOv7x, YOLOv7-W6, YOLOv7-D6, YOLOv8s, YOLOv8n, YOLOv8m, YOLOv8l, and YOLOv8x. A training dataset of 80,486 images was labeled with YOLO Label v1.2.1, and 482 drone-captured images containing 5,233 oil palms were used for testing. The YOLOv8 series showed notable improvements: YOLOv8m achieved the highest F1-score at 99.31%, indicating the best detection accuracy, while YOLOv8s delivered markedly shorter detection times, making it well suited to large-scale environmental surveys and real-time monitoring. Precise identification of oil palm trees supports better resource management and a smaller environmental footprint, which in turn supports the use of these models alongside drone and satellite imaging technologies for agricultural economic sustainability and optimal crop management.
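
For reference, the F1-score cited for each model is the harmonic mean of precision and recall; the standard definitions in terms of true positives (TP), false positives (FP), and false negatives (FN) are

\[
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}.
\]

A minimal sketch of how such an evaluation could be run with the Ultralytics YOLOv8 API is given below. The dataset configuration file oil_palm.yaml and the choice of weights are illustrative assumptions, not artifacts of the study; the authors' own training pipeline and hyperparameters are not reproduced here.

```python
# Minimal evaluation sketch using the Ultralytics YOLOv8 API (assumed setup).
# "oil_palm.yaml" is a hypothetical dataset config pointing at the drone imagery;
# replace it and the weights file with your own before running.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # or a checkpoint fine-tuned on oil palm data

# Validate on the held-out split described in the dataset YAML.
metrics = model.val(data="oil_palm.yaml", imgsz=640, split="test")

precision = metrics.box.mp  # mean precision over classes
recall = metrics.box.mr     # mean recall over classes
f1 = 2 * precision * recall / (precision + recall)

print(f"Precision: {precision:.4f}  Recall: {recall:.4f}  F1: {f1:.4f}")
print(f"Inference time per image: {metrics.speed['inference']:.1f} ms")
```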
