This paper addresses the challenge of accurately estimating bee orientations on beehive landing boards, which is crucial for optimizing beekeeping practices and enhancing agricultural productivity. The research uses YOLOv8-pose models trained on a dataset created with an open-source computer vision annotation tool. The annotation process associates bounding boxes with keypoints that represent bee orientation: each bee is annotated with two keypoints, one for the head and one for the stinger. The YOLOv8-pose models demonstrate high precision, achieving 98% accuracy for both bounding box and keypoint detection on 1024×576 px images. However, there are trade-offs between model size and processing speed: the smaller nano model reaches 67 frames per second on 640×384 px images. The entrance ramp detection model achieves 91.7% intersection over union across four keypoints, making it effective for detecting the hive's landing board. The paper concludes with plans for future research, including behavioral analysis of bee colonies and model optimization for real-time applications.
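To make the two-keypoint scheme concrete, the following is a minimal sketch of how such a pose model could be queried with the generic Ultralytics YOLOv8-pose API and how an orientation angle can be derived from the head and stinger keypoints. The checkpoint name `bee_pose.pt`, the input image name, and the keypoint ordering (head first, stinger second) are assumptions for illustration, not artifacts released with the paper.

```python
# Minimal sketch (assumptions): "bee_pose.pt" is a hypothetical fine-tuned
# bee-pose checkpoint; keypoint 0 is taken to be the head and keypoint 1 the
# stinger, following the annotation scheme described in the abstract.
import math
from ultralytics import YOLO

model = YOLO("bee_pose.pt")                               # hypothetical checkpoint
results = model.predict("landing_board.jpg", imgsz=640)   # landing-board frame

for r in results:
    boxes = r.boxes.xyxy.cpu().numpy()        # one bounding box per detected bee
    keypoints = r.keypoints.xy.cpu().numpy()  # shape: (num_bees, 2, 2)
    for box, (head, stinger) in zip(boxes, keypoints):
        # Orientation: angle of the stinger-to-head vector, in degrees.
        dx, dy = head[0] - stinger[0], head[1] - stinger[1]
        angle = math.degrees(math.atan2(dy, dx))
        print(f"bee at {box.round(1)}: heading {angle:.1f} deg")
```

The orientation here is simply the direction of the vector from the stinger keypoint to the head keypoint; any downstream behavioral analysis would build on per-bee angles of this kind.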