Abstract

Recently, camera-based approaches have been proposed to replace expensive LiDAR- or real-time kinematic (RTK)-based solutions for unmanned tractors operating in orchard-like environments. However, constructing a completely autonomous navigation stack with a low-cost camera and onboard computer presents additional challenges: the perception model must be computationally efficient, paths must be planned quickly without costly SLAM, and obstacles must be avoided. In this paper, a novel vision-based autonomous navigation stack is devised to address the challenges of a fully autonomous tractor using an inexpensive stereo camera and inertial measurement unit (IMU). The computational pipeline consists of three main modules: (1) the multi-task perception network, (2) the frame transformation algorithm, and (3) the motion planning module. The multi-task perception network simultaneously detects tree trunks and obstacles and segments traversable areas from the RGB image with high efficiency (69 FPS) and accuracy (mAP@.5 of 96.7% and mIoU of 98.1%). For global path planning and trajectory prediction, the frame transformation algorithm fuses the downsized navigational features and transforms them from the image frame to the tractor frame. The motion planning module then uses the fused data to plan the appropriate path (i.e., the center of the tree row, a U-turn path, or a J-turn path) and searches for the optimal trajectory using the optimized dynamic window approach (DWA) for path tracking. The proposed stack is implemented on our retrofitted autonomous tractor, and its efficacy is demonstrated in a peach orchard through comparison with a human-driven tractor.
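
The abstract does not detail how the optimized DWA scores candidate trajectories, so the following is only a minimal sketch of one planning step of a standard DWA in Python, operating on obstacle points and a goal expressed in the tractor frame. All names here (the config keys, the alpha/beta/gamma weights, and the helper functions) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dwa_step(state, goal, obstacles, config):
    """One planning step of a basic dynamic window approach (sketch).

    state: (x, y, yaw, v, omega); goal: (gx, gy) in the tractor frame;
    obstacles: Mx2 array of obstacle points in the tractor frame.
    """
    # Dynamic window: velocities reachable within one control interval,
    # clipped to the tractor's absolute limits.
    v_lo = max(config["min_v"], state[3] - config["max_acc"] * config["dt"])
    v_hi = min(config["max_v"], state[3] + config["max_acc"] * config["dt"])
    w_lo = max(-config["max_w"], state[4] - config["max_dw"] * config["dt"])
    w_hi = min(config["max_w"], state[4] + config["max_dw"] * config["dt"])

    best_score, best_cmd = -np.inf, (0.0, 0.0)
    for v in np.arange(v_lo, v_hi, config["v_res"]):
        for w in np.arange(w_lo, w_hi, config["w_res"]):
            traj = simulate(state, v, w, config["predict_time"], config["dt"])
            clearance = min_clearance(traj, obstacles)
            if clearance < config["radius"]:
                continue  # trajectory collides with an obstacle; discard it
            # Score: progress toward the goal, clearance, and forward speed.
            heading = -np.hypot(goal[0] - traj[-1, 0], goal[1] - traj[-1, 1])
            score = (config["alpha"] * heading
                     + config["beta"] * clearance
                     + config["gamma"] * v)
            if score > best_score:
                best_score, best_cmd = score, (v, w)
    return best_cmd  # (linear velocity, yaw rate) command

def simulate(state, v, w, horizon, dt):
    """Roll out a constant (v, w) command with a unicycle motion model."""
    x, y, yaw = state[0], state[1], state[2]
    traj = []
    for _ in range(int(horizon / dt)):
        yaw += w * dt
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        traj.append((x, y))
    return np.array(traj)

def min_clearance(traj, obstacles):
    """Smallest distance between the rolled-out trajectory and any obstacle."""
    if len(obstacles) == 0:
        return np.inf
    d = np.hypot(traj[:, None, 0] - obstacles[None, :, 0],
                 traj[:, None, 1] - obstacles[None, :, 1])
    return d.min()
```

In a full stack of the kind described above, the obstacle points and the row-center or headland-turn goal would be supplied by the perception and frame transformation modules; the optimized DWA in the paper may differ in its cost terms and sampling strategy.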
