Abstract

Large-scale, high-accuracy, and adaptive three-dimensional (3D) perception is a basic technical requirement for constructing a practical and stable fruit-picking robot. The latest vision-based fruit-picking robots are able to cope with the complex backgrounds, uneven lighting, and low color contrast of the orchard environment. However, most of them have, until now, been limited to a small field of view or rigid sampling schemes. Although simultaneous localization and mapping (SLAM) methods have the potential to realize large-scale sensing, this study reveals that the classic SLAM pipeline is not fully suited to orchard picking tasks. In this study, an eye-in-hand stereo vision system and a SLAM system were integrated to provide a detailed global map supporting long-term, flexible, and large-scale orchard picking. Specifically, a mobile robot based on eye-in-hand vision was built and an effective hand-eye calibration method was proposed; a state-of-the-art object detection network was trained and used to establish a dynamic stereo matching method that adapts well to complex orchard environments; and a SLAM system was deployed and combined with the eye-in-hand stereo vision system to obtain a detailed, wide-area 3D orchard map. The main contribution of this work is a new global mapping framework compatible with the nature of orchard picking tasks. Compared with existing studies, this work pays more attention to the structural details of the orchard. Experimental results indicated that the constructed global map achieves both large scale and high resolution. This exploratory work provides theoretical and technical references for future research on more stable, accurate, and practical mobile fruit-picking robots.
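As context for the eye-in-hand setup summarized above, the sketch below shows one common way an eye-in-hand (camera-on-arm) calibration can be performed with OpenCV's calibrateHandEye. It is a generic, minimal example, not the calibration method proposed in the paper; the input pose lists (gripper-to-base transforms from robot forward kinematics, target-to-camera transforms from a detected calibration board) and the function names are assumptions for illustration.

```python
# Minimal sketch of a generic eye-in-hand calibration with OpenCV.
# NOT the paper's proposed method; inputs and names are illustrative assumptions.
import numpy as np
import cv2


def calibrate_eye_in_hand(poses_gripper2base, poses_target2cam):
    """Estimate the fixed camera-to-gripper transform from paired poses.

    poses_gripper2base: list of 4x4 homogeneous transforms from robot forward kinematics
    poses_target2cam:   list of 4x4 homogeneous transforms of a calibration target
                        observed by the camera (e.g., from solvePnP on a checkerboard)
    """
    # Split each homogeneous transform into rotation and translation parts.
    R_g2b = [T[:3, :3] for T in poses_gripper2base]
    t_g2b = [T[:3, 3] for T in poses_gripper2base]
    R_t2c = [T[:3, :3] for T in poses_target2cam]
    t_t2c = [T[:3, 3] for T in poses_target2cam]

    # Tsai's method is one classic solver; OpenCV offers several alternatives.
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_g2b, t_g2b, R_t2c, t_t2c, method=cv2.CALIB_HAND_EYE_TSAI
    )

    # Reassemble the result as a single 4x4 homogeneous transform.
    T_cam2gripper = np.eye(4)
    T_cam2gripper[:3, :3] = R_cam2gripper
    T_cam2gripper[:3, 3] = t_cam2gripper.ravel()
    return T_cam2gripper
```

In an eye-in-hand configuration this fixed camera-to-gripper transform is what lets per-frame stereo measurements be chained through the arm's kinematics into a common robot (and ultimately map) frame.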
