Abstract

To improve the accuracy of cherry fruit recognition during picking operations and to enable automated cherry picking by robots, original and complex sample image sets were collected on site at the orchard in two phases. From these, a basic data set, an enhanced data set, and a task data set were generated; the recognition targets of the basic and enhanced data sets were all mature cherry fruits, while the recognition targets of the task data set were only the cherry fruits within the current operation range. First, the basic and enhanced data sets were used for comparative training of You Only Look Once (YOLO) v3. The test results demonstrated that YOLO v3's ability to recognize occluded and overlapping cherry fruits can be greatly improved by increasing the quantity and proportion of heavily occluded and overlapping samples in the training set. Second, the enhanced data set was used to train the Faster R-CNN, SSD, YOLO v3, and YOLO v5 networks; of these, YOLO v5 achieved the best recognition of mature cherries. Finally, the enhanced data set and the task data set were used together to train YOLO v5. Experiments showed that YOLO v5 successfully identified the cherry fruits to be picked within the current task range while ignoring mature cherries outside that range that should not be picked, achieving a precision of 96% and a recall of 99%.
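The paper does not give implementation details, but the task-range behavior described above can be approximated as a post-processing step on detector output: keep only detections whose centers fall inside the current operation region. The box format, ROI representation, and function names below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: restricting detections to the current operation range.
# Boxes and the ROI are (x1, y1, x2, y2) pixel rectangles - an assumed
# convention, matching common YOLO post-processing output formats.

def in_operation_range(box, roi):
    """Return True if the box center lies inside the task ROI."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    rx1, ry1, rx2, ry2 = roi
    return rx1 <= cx <= rx2 and ry1 <= cy <= ry2

def filter_detections(detections, roi):
    """Keep (box, confidence) pairs whose centers fall inside the ROI.

    `detections` would typically come from a YOLO forward pass after
    non-maximum suppression; here it is just a list of tuples.
    """
    return [(box, conf) for box, conf in detections
            if in_operation_range(box, roi)]
```

Note that the paper instead trains YOLO v5 directly on a task data set so the network itself learns to suppress out-of-range fruit; the sketch above is only a geometric approximation of that behavior.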
