Abstract

Reliable and robust fruit-detection algorithms in nonstructural environments are essential for the efficient use of harvesting robots. The pose of a fruit is crucial for guiding a robot to approach the target for collision-free picking. To achieve accurate picking, this study investigates an approach to detect a fruit and estimate its pose. First, the state-of-the-art mask region convolutional neural network (Mask R-CNN) is deployed to segment binocular images and output a mask image of the target fruit. Next, the grape point cloud extracted from the images is filtered and denoised to obtain an accurate grape point cloud. Finally, the denoised point cloud is used with the RANSAC algorithm to fit a cylinder model of the grape cluster, and the axis of the cylinder model is used to estimate the pose of the grape. A dataset was acquired in a vineyard to evaluate the performance of the proposed approach in a nonstructural environment. The fruit-detection results on 210 test images show that the average precision, recall, and intersection over union (IOU) are 89.53%, 95.33%, and 82.00%, respectively. The detection and point cloud segmentation for each grape took approximately 1.7 s. The demonstrated performance of the developed method indicates that it can be applied to grape-harvesting robots.
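The final step above fits a cylinder to the grape point cloud with RANSAC and reads the pose off the cylinder axis. The paper's exact fitting procedure is not given in this excerpt; the sketch below is a simplified, hypothetical stand-in that uses RANSAC to recover only the axis of an elongated, already-denoised point cloud (a 3D line fit rather than a full five-parameter cylinder), since the axis is the quantity used for pose estimation. Function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def ransac_axis_fit(points, n_iters=500, inlier_thresh=0.03, rng=None):
    """Estimate the dominant axis of a point cloud with RANSAC.

    Repeatedly samples two points, forms a candidate axis line, and keeps
    the line with the most inliers (points whose perpendicular distance to
    the line is below `inlier_thresh`, in the cloud's units, e.g. meters).
    The best inlier set is then refined with a least-squares (PCA) fit.
    Returns (point_on_axis, unit_direction, inlier_mask).
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_count = None, -1
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p0, d = points[i], points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:          # degenerate sample; skip
            continue
        d /= norm
        # Perpendicular distance of every point to the candidate line.
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < inlier_thresh
        count = int(inliers.sum())
        if count > best_count:
            best_count, best_inliers = count, inliers
    # Refine: principal axis of the inlier set via SVD.
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])
    return centroid, axis, best_inliers
```

A full cylinder RANSAC would instead score candidates by distance to the cylinder *surface* (|distance to axis − radius| below a threshold), which makes the fit robust to hollow, surface-only clouds; the line fit above assumes the cloud is roughly filled along its axis, as a segmented grape cluster typically is.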

Highlights

  • Grapes have become one of the most globally popular fruits because of their desirable taste and rich nutrition

  • This study proposes an algorithm for grape detection and point cloud segmentation that provides high precision, recall, and intersection over union (IOU)

  • The detection and point cloud segmentation for each grape takes approximately 1.7 s, which meets the requirements of real-time operation for harvesting robots

Introduction

Grapes have become one of the most globally popular fruits because of their desirable taste and rich nutrition. With an aging population and a shrinking agricultural labor force in China, it is urgent to develop automated grape-harvesting robots capable of working in the field (Lin et al., 2019). Scholars around the world have studied fruit-harvesting robots based primarily on machine vision (Tang et al., 2020b), including for sweet peppers (Bac et al., 2017), cucumbers (Van Henten et al., 2003), strawberries (Hayashi et al., 2010; Feng et al., 2012; Han et al., 2012), litchi (Wang et al., 2016), apples (De-An et al., 2011; Wang et al., 2017), and grapes (Botterill et al., 2017). Although many harvesting robots have emerged, fruit-detection systems remain a fragile link, especially for harvesting robots facing the complexity of nonstructural orchard environments and the unstructured features of fruits.
