Abstract

Vision-based autonomous navigation is widely used by agricultural robots. However, factors such as large areas of weeds, discontinuous crop rows, and differences in ambient lighting across plant growth stages pose challenges to autonomous robotic navigation in farms. This paper presents a vision-based method that fuses a vegetation index with ridge segmentation for robust and precise extraction of navigation lines in lettuce farms. First, a vegetation index is computed from the captured image, and farm ridges are extracted with a semantic segmentation network. The vegetation index and the ridge segmentation result are then fused to obtain the plant segmentation result. Since the method only needs to segment ridges, it avoids tedious pixel-wise manual labeling of vegetable plants to train a plant segmentation network, yet provides accurate and reliable plant segmentation. Second, a modified Progressive Sample Consensus (PROSAC) algorithm and a distance filter are proposed to fit a line through the center points of the plants, which effectively eliminates outliers and extracts a reliable and accurate center line of the current lane for autonomous navigation. Comprehensive experiments validate the effectiveness of the method. The results show that the proposed method outperforms conventional methods based on the vegetation index or ridge segmentation alone by effectively reducing the interference caused by weeds, irregular branches and leaves, and missing rows. The proposed method runs at 10 frames per second (FPS) and thus satisfies the real-time requirement of robot navigation. Although the method is only demonstrated in lettuce farms, it can be naturally applied to other vegetable farms, e.g., broccoli and early-stage sugar beet farms.
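The fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not say which vegetation index is used, so the common Excess Green (ExG) index is assumed here, and the threshold value and function names are hypothetical.

```python
import numpy as np

def excess_green(rgb):
    """Excess Green (ExG) vegetation index on an H x W x 3 RGB image.
    ExG is an assumed choice; the paper does not name its index."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    s = r + g + b + 1e-6  # normalize channels to suppress lighting variation
    r, g, b = r / s, g / s, b / s
    return 2 * g - r - b

def fuse_plant_mask(rgb, ridge_mask, thresh=0.05):
    """Fuse the vegetation index with a ridge segmentation mask:
    a pixel counts as plant only if it is vegetative AND lies on a
    ridge, which suppresses weeds growing between the ridges."""
    veg = excess_green(rgb) > thresh
    return veg & ridge_mask.astype(bool)

# toy 2x2 image: green pixel on the ridge, red/green/blue pixels elsewhere
img = np.array([[[20, 200, 20], [200, 20, 20]],
                [[20, 200, 20], [20, 20, 200]]], dtype=np.uint8)
ridge = np.array([[1, 1], [0, 0]], dtype=bool)  # top row is the ridge
mask = fuse_plant_mask(img, ridge)
# only the green pixel on the ridge survives the fusion
```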
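The line-fitting step can be illustrated with a simplified sample-consensus fit over plant center points. The paper uses a modified PROSAC with a distance filter; this sketch substitutes plain RANSAC (PROSAC's quality-based sample ordering is omitted), and all names and parameter values are assumptions for illustration.

```python
import numpy as np

def ransac_line(points, iters=200, tol=2.0, seed=0):
    """Fit a line through N x 2 center points while rejecting outliers.
    Simplified RANSAC stand-in for the paper's modified PROSAC."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        dx, dy = q - p
        n = np.hypot(dx, dy)
        if n == 0:
            continue
        # perpendicular distance of every point to the line through p and q
        dist = np.abs(dx * (points[:, 1] - p[1])
                      - dy * (points[:, 0] - p[0])) / n
        inliers = dist < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    # least-squares refit on the inlier set only
    slope, intercept = np.polyfit(points[best, 0], points[best, 1], 1)
    return slope, intercept, best

# four plant centers on y = 2x + 1 plus one weed-like outlier
pts = np.array([[0., 1.], [1., 3.], [2., 5.], [3., 7.], [10., 0.]])
slope, intercept, inliers = ransac_line(pts)
```

The consensus step discards the outlier before the final least-squares refit, which is what makes the extracted center line robust to weeds and missing plants.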
