Abstract

Precision agriculture is a growing field within the agricultural industry, and it holds great potential for fruit and vegetable harvesting. In this work, we present a robust and accurate method for detecting and localizing the peduncle of table grapes, with direct application to robotic grape harvesting. The bunch and peduncle detection methods presented here rely on a combination of instance segmentation and monocular depth estimation using Convolutional Neural Networks (CNNs). For depth estimation, we propose a combination of depth techniques that allows precise localization of the peduncle with conventional stereo cameras, despite the particular complexity of grape peduncles. The proposed methods have been tested on the Embrapa Wine Grape Instance Segmentation Dataset (WGISD), improving on state-of-the-art results. Furthermore, within the context of the EU project CANOPIES, the methods have also been tested on a dataset of 1,326 RGB-D images of table grapes, recorded at the Corsira Agricultural Cooperative Society (Aprilia, Italy) using a RealSense D435i camera mounted on the arm of a CANOPIES two-manipulator robot developed in the project. The detection results on the WGISD dataset show that using RGB-D information (mAP=0.949) leads to superior performance compared to RGB data alone (mAP=0.891). This trend is also evident in the CANOPIES Grape Bunch and Peduncle dataset, where the mAP for RGB-D images (mAP=0.767) outperforms that of RGB data (mAP=0.725). Regarding depth estimation, our method achieves a mean squared error of 2.66 cm within a distance of 1 m on the CANOPIES dataset.
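The abstract describes combining an instance-segmentation mask with an aligned depth map to localize the peduncle in 3D. The following is a minimal sketch of that general idea, not the paper's actual pipeline: the function name, the use of the mask's median depth and centroid, and the camera intrinsics are all illustrative assumptions.

```python
import numpy as np

def localize_peduncle(depth_map, peduncle_mask, fx, fy, cx, cy):
    """Hypothetical helper: back-project the centroid of a peduncle
    instance mask to a 3D point, using an aligned depth map and
    pinhole intrinsics (fx, fy, cx, cy). Depth values are in meters;
    zeros mark missing stereo readings."""
    ys, xs = np.nonzero(peduncle_mask)
    depths = depth_map[ys, xs]
    valid = depths > 0  # discard pixels with no depth measurement
    if not np.any(valid):
        return None
    # Median depth is robust to outliers, which matter for thin
    # structures like peduncles where stereo matching is noisy.
    z = float(np.median(depths[valid]))
    u = float(xs[valid].mean())
    v = float(ys[valid].mean())
    # Pinhole back-projection of the mask centroid.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

In a real system the mask would come from the CNN's instance-segmentation head and the depth map from the RealSense D435i's aligned depth stream; here both are passed in as plain arrays to keep the sketch self-contained.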
