Abstract

Apple-picking robots are now widely regarded as the most promising substitute for low-efficiency, high-cost, labor-intensive manual apple harvesting. Although most current apple-picking robots perform well in the laboratory, they are largely unworkable in orchard environments because of unsatisfactory apple positioning performance. In general, an accurate, fast, and widely applicable apple positioning method for apple-picking robots is still lacking. Some positioning methods based on deep-learning object detection have reached acceptable performance in certain orchards; however, these detection-based methods ignore apples occluded by other apples, leaves, and branches. Therefore, an apple binocular positioning method based on a Mask Region-based Convolutional Neural Network (Mask R-CNN, an instance segmentation network) was developed to achieve better apple positioning. A binocular camera (Bumblebee XB3) was used to capture binocular images of apples. Mask R-CNN was then applied to perform instance segmentation on the binocular images. Next, template matching with a parallel epipolar line constraint was applied for the stereo matching of apples. Finally, four feature point pairs per apple were selected from the binocular images to calculate disparity and depth. The trained Mask R-CNN achieved detection and segmentation intersection-over-union (IoU) values of 80.11% and 84.39%, respectively. The coefficient of variation (CoV) and positioning accuracy (PA) of the binocular positioning were 5.28 mm and 99.49%, respectively. This research provides a new method for binocular apple positioning built on a segmentation-based neural network.
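The final positioning step described above reduces to triangulating depth from the disparity of matched feature points on a rectified stereo pair. The sketch below is a minimal illustration of that step only, not the authors' implementation: the focal length, baseline, and feature coordinates are hypothetical placeholders (not the Bumblebee XB3 calibration), and averaging depth over four matched point pairs simply mirrors the four-feature-point scheme mentioned in the abstract.

```python
import numpy as np

# Hypothetical rectified-stereo parameters (placeholders, not the Bumblebee XB3 calibration).
FOCAL_LENGTH_PX = 1200.0   # focal length in pixels
BASELINE_M = 0.12          # distance between the two camera centers, in metres


def depth_from_disparity(left_pts, right_pts, focal_px, baseline_m):
    """Triangulate depth for matched point pairs on a rectified stereo pair.

    left_pts, right_pts: (N, 2) arrays of (x, y) pixel coordinates. With a
    parallel epipolar-line constraint the y coordinates of a matched pair are
    (approximately) equal, so depth depends only on the horizontal disparity.
    """
    left_pts = np.asarray(left_pts, dtype=float)
    right_pts = np.asarray(right_pts, dtype=float)
    disparity = left_pts[:, 0] - right_pts[:, 0]   # horizontal pixel offset
    if np.any(disparity <= 0):
        raise ValueError("non-positive disparity: check the matching order")
    return focal_px * baseline_m / disparity       # depth Z = f * B / d


if __name__ == "__main__":
    # Four hypothetical matched feature points on one apple (e.g. a mask centroid
    # and three contour points), echoing the four-point scheme described above.
    left = [(642.0, 380.0), (655.0, 372.0), (630.0, 391.0), (648.0, 399.0)]
    right = [(598.0, 380.0), (611.0, 372.0), (586.0, 391.0), (604.0, 399.0)]

    depths = depth_from_disparity(left, right, FOCAL_LENGTH_PX, BASELINE_M)
    print("per-point depth (m):", np.round(depths, 3))
    print("apple depth estimate (m):", round(float(depths.mean()), 3))
```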
