Abstract

Computer vision tasks such as image recognition, object detection, and semantic segmentation have contributed tremendously to autonomous harvesting. However, these tasks apply only to the visible parts of an object in an image. In agriculture, detecting green vegetables whose color differs only subtly from the surrounding foliage poses an additional challenge, and the nature of cucumbers (Cucumis sativus) makes them particularly difficult to detect. To address this issue, we reconstructed the occluded parts of cucumbers to help autonomous robots detect and locate picking-point positions. We generated a dataset of cucumber images from two farms in South Korea and superimposed synthetic leaf patches on the images to simulate the effect of occlusion. Using this dataset, we employed amodal segmentation with an auto-encoder and conducted an ablation study on shape-prior post-processing, shape-prior refinement, and feature matching to determine a suitable method for our cucumber dataset. We then proposed amodal segmentation with a U-net reconstruction network as a novel model for cucumber occlusion recovery. In the ablation study, the model with no additional processing achieved the highest accuracy, with an average precision (AP) of 49.31 and an average precision at an intersection-over-union (IoU) threshold of 0.5 (AP50) of 82.39, as well as the fastest inference time of 233 ms/image. Our proposed model outperformed the auto-encoder-based models, with an AP of 50.06, an AP50 of 82.43, and an inference time of 220 ms/image. The proposed method was shown to be effective in improving the accuracy of cucumber segmentation under occlusion. Amodal segmentation, particularly with the U-net reconstruction network, therefore appears promising for the vision systems of cucumber-picking robots. The main contribution of this study is that, with amodal segmentation, occluded cucumber instances can be detected in a single stage with promising accuracy and speed, eliminating the need for additional time-consuming manipulator operations during harvest decision making.
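For illustration, the occlusion-simulation step described above (superimposing synthetic leaf patches on cucumber images) can be sketched as follows. This is a minimal example, not the authors' implementation: it assumes RGBA leaf patches with transparent backgrounds, and all file names, function names, and parameters are illustrative assumptions rather than details from the paper.

    # Sketch of synthetic leaf-patch occlusion: alpha-composite an RGBA leaf
    # patch onto a cucumber image at a random position. Names and paths are
    # hypothetical, not taken from the paper.
    import numpy as np
    from PIL import Image

    def occlude_with_leaf(image: Image.Image, leaf: Image.Image,
                          rng: np.random.Generator) -> Image.Image:
        """Paste a synthetic leaf patch at a random location to simulate occlusion."""
        img = image.convert("RGBA")
        max_x = max(img.width - leaf.width, 1)
        max_y = max(img.height - leaf.height, 1)
        x = int(rng.integers(0, max_x))
        y = int(rng.integers(0, max_y))
        img.paste(leaf, (x, y), mask=leaf)  # the patch's alpha channel drives blending
        return img.convert("RGB")

    # Usage: build an occluded copy of a training image.
    rng = np.random.default_rng(0)
    cucumber = Image.open("cucumber.jpg")
    leaf = Image.open("leaf_patch.png")  # RGBA patch, transparent background
    occluded = occlude_with_leaf(cucumber, leaf, rng)
    occluded.save("cucumber_occluded.jpg")

Because the patch position is known, the unoccluded (amodal) ground-truth mask of the cucumber remains available for training, which is what makes this kind of synthetic occlusion useful for amodal segmentation.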
