The unstructured nature of orchard environments poses significant challenges for the autonomous navigation of orchard robots. Teleoperation combined with virtual reality (VR) has emerged as a promising way to overcome the limitations of on-board autonomous navigation in general-purpose robots. However, constructing accurate and semantically meaningful VR maps of orchard environments remains difficult. This study proposes a novel VR map construction framework for orchard robot teleoperation visualization, comprising two key components: dual-source combined positioning and semantic segmentation of sparse point clouds. First, a sliding window-based data fusion approach is proposed to address positioning in semi-obstructed orchard environments; it combines GNSS data with laser scan matching to achieve robust and accurate positioning for orchard robots. Second, the framework exploits the mounting characteristics of the robot's LiDAR to classify sparse point clouds in real time, allowing the removal of unstable elements, such as leaves, that would hinder effective teleoperation. By integrating these components, a VR map suitable for teleoperation of orchard robots is obtained. To validate the proposed method, a VR teleoperation platform was constructed. The experimental results demonstrate that the method fulfills the fundamental requirements for map visualization during VR remote operation of orchard robots. Furthermore, this study provides a valuable reference for the application of digital twins in agricultural robotics.
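The abstract names a sliding window-based fusion of GNSS fixes and laser scan matching as the positioning component, but does not give its formulation. The snippet below is only a minimal sketch of how such a fusion could be organized; the class, names, window size, and HDOP-based weighting heuristic are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): sliding-window fusion of GNSS fixes
# and LiDAR scan-matching odometry for 2-D robot positioning. Window size and
# the HDOP-based weighting are assumed values for illustration only.
from collections import deque
from dataclasses import dataclass
from typing import Optional

WINDOW_SIZE = 10       # assumed number of recent samples kept in the window
HDOP_THRESHOLD = 2.5   # assumed GNSS quality cutoff for semi-obstructed rows


@dataclass
class Sample:
    x: float                 # easting (m)
    y: float                 # northing (m)
    hdop: Optional[float]    # GNSS dilution of precision; None for scan matching


class SlidingWindowFusion:
    def __init__(self) -> None:
        self.window = deque(maxlen=WINDOW_SIZE)

    def add_gnss(self, x: float, y: float, hdop: float) -> None:
        self.window.append(Sample(x, y, hdop))

    def add_scan_match(self, x: float, y: float) -> None:
        self.window.append(Sample(x, y, None))

    def fused_pose(self) -> tuple:
        """Weighted average over the window: clean GNSS fixes get higher
        weight, degraded fixes (high HDOP, e.g. under the canopy) are
        down-weighted so laser scan matching dominates."""
        if not self.window:
            raise ValueError("no samples in the window yet")
        num_x = num_y = denom = 0.0
        for s in self.window:
            if s.hdop is None:
                w = 1.0                  # scan-matching pose: nominal weight
            elif s.hdop <= HDOP_THRESHOLD:
                w = 2.0 / s.hdop         # open sky: trust GNSS more
            else:
                w = 0.1                  # obstructed: nearly ignore GNSS
            num_x += w * s.x
            num_y += w * s.y
            denom += w
        return (num_x / denom, num_y / denom)


# Usage sketch:
# fusion = SlidingWindowFusion()
# fusion.add_gnss(10.0, 5.0, hdop=1.2)
# fusion.add_scan_match(10.1, 5.05)
# x, y = fusion.fused_pose()
```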