Abstract

Accurate depth sensing is crucial to securing a high success rate of robotic harvesting in natural orchard environments. Solid-state LiDAR, a recently introduced type of LiDAR sensor, can perceive high-resolution geometric information of a scene, which can be utilised to obtain accurate depth information. Meanwhile, fusing the sensory data from the LiDAR and the camera can significantly enhance the sensing ability of harvesting robots. This work introduces a LiDAR-camera fusion-based visual sensing and perception strategy to perform accurate fruit localisation in apple orchards. Two state-of-the-art (SOTA) LiDAR-camera extrinsic calibration methods are evaluated to obtain an accurate extrinsic matrix between the LiDAR and the camera. The point clouds and colour images are then fused to perform fruit localisation using a one-stage instance segmentation network. Comprehensive experiments show that the LiDAR-camera system achieves better visual sensing performance in natural environments, and that introducing LiDAR-camera fusion largely improves the accuracy and robustness of fruit localisation. Specifically, the standard deviations of fruit localisation using the LiDAR-camera at 0.5, 1.2, and 1.8 m are 0.253, 0.230, and 0.285 cm, respectively, in the afternoon under intensive sunlight. This measurement error is much smaller than that of the RealSense D455. Lastly, visualised point clouds of the apple trees (https://drive.google.com/drive/folders/16NV0Bb6N-zlvJC0bFyu-8pl4Gbh9lyOE?usp=sharing) are provided to demonstrate the highly accurate sensing results of the proposed LiDAR-camera fusion method.
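The fusion step described above rests on a standard projective-geometry operation: LiDAR points are transformed into the camera frame by the calibrated extrinsic matrix, projected onto the image plane with the camera intrinsics, and the resulting per-pixel depths are associated with the fruit masks from the segmentation network. The sketch below is a minimal illustration of that projection, not the authors' implementation; the intrinsic matrix, extrinsic matrix, and point cloud used here are placeholder values that would in practice come from calibration and the sensor.

```python
# Minimal sketch of LiDAR-to-image projection for LiDAR-camera fusion.
# K, T_cam_lidar, and the point cloud are illustrative placeholders.
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_lidar: (N, 3) points in the LiDAR frame.
    T_cam_lidar:  (4, 4) extrinsic matrix mapping LiDAR -> camera frame.
    K:            (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates and (M,) camera-frame depths
    for the points in front of the camera.
    """
    # Homogenise the points and transform them into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Discard points behind the camera (non-positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]

# Illustrative usage with placeholder calibration values.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_cam_lidar = np.eye(4)  # would come from extrinsic calibration
cloud = np.random.rand(1000, 3) * 2.0  # stand-in for a LiDAR scan
uv, depth = project_lidar_to_image(cloud, T_cam_lidar, K)
```

The depth of a detected fruit could then be estimated from the projected points that fall inside its instance segmentation mask, for example via a robust statistic such as the median, which is one plausible way to realise the localisation step summarised in the abstract.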
