Abstract
Quality grading and sorting are crucial post-harvest procedures for increasing the market value of crops. To reduce the over-dependence on manual labor for these tasks, automated sorting methods have been studied. However, we find that existing research focuses mainly on round fruits and vegetables, and that most machines can only grade one object at a time in their vision pipeline. To address these limitations, this paper proposes a comprehensive framework for detecting and analyzing rod-like crops based on multi-object oriented detection. Zizania shoots are primarily used to validate our methods. To implement the deep learning models, an efficient oriented bounding box labeling tool called OBBLabel is developed, and several large-scale image datasets are constructed. Both the software and the datasets are open-sourced for the community. Based on the YOLOv8 architecture, the proposed YOLO-OBB model predicts oriented bounding boxes to extract all rod-like targets in an image, achieving 0.903 mAP@0.5. A multi-label recognition model, YOLO-MLD, then conducts quality grading and posture perception on each individual target with 93.4% mean accuracy. Thus, precise position and quality information for all objects can be obtained in near real-time for subsequent suction-cup-based sorting operations. Furthermore, based on the proposed framework, this paper designs and manufactures a prototype sorting machine with edge intelligence for rod-like crops.
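To illustrate the two-stage pipeline described above (oriented detection of all targets, followed by per-target grading), the following is a minimal sketch. It assumes an (cx, cy, w, h, angle) oriented-box parameterization and uses placeholder callables `obb_detector` and `grading_model` to stand in for YOLO-OBB and YOLO-MLD; these names, the cropping step, and the interfaces are assumptions for illustration, not the authors' released code.

```python
import cv2
import numpy as np

def crop_oriented_box(image, cx, cy, w, h, angle_deg):
    """Extract an axis-aligned crop of one oriented bounding box.

    The (cx, cy, w, h, angle) parameterization is a common OBB convention;
    the exact format used by YOLO-OBB may differ.
    """
    # Rotate the whole image so the target box becomes axis-aligned.
    rot = cv2.getRotationMatrix2D((cx, cy), angle_deg, 1.0)
    rotated = cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))
    # Cut out the now-upright w x h patch around the box center.
    x0, y0 = int(cx - w / 2), int(cy - h / 2)
    return rotated[max(y0, 0): y0 + int(h), max(x0, 0): x0 + int(w)]

def sort_image(image, obb_detector, grading_model):
    """Hypothetical two-stage pipeline: detect every rod-like target,
    then grade each cropped target individually."""
    results = []
    for cx, cy, w, h, angle in obb_detector(image):   # oriented boxes for all targets
        crop = crop_oriented_box(image, cx, cy, w, h, angle)
        # Pair the target's position/posture with its predicted quality labels,
        # e.g. for a downstream suction cup-based sorting actuator.
        results.append(((cx, cy, angle), grading_model(crop)))
    return results
```

In this sketch the position and angle returned per target correspond to the pose information needed by the suction cup-based sorter, while the second-stage model supplies the quality and posture labels.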