Extracting the navigation line of crop seedlings is essential for achieving autonomous visual navigation of smart agricultural machinery. However, most existing studies on navigation line extraction for seedling-stage field management have focused on specific crops at specific growth stages, and a generalizable algorithm for complex, cross-growth-stage seedling conditions is still lacking. To address this challenge, we propose a generalizable navigation line extraction algorithm based on classical image processing techniques. First, image preprocessing is performed to enhance image quality and extract distinct crop regions, and redundant pixels are eliminated by a morphological opening operation and eight-connected component filtering. Then, optimal region detection is applied to identify the fitting area, and the optimal pixels of the plantation rows are selected by cluster-centerline distance comparison and sigmoid thresholding. Finally, the navigation line, representing the autonomous vehicle's optimal path, is extracted by linear fitting. The algorithm was evaluated on a sugarcane dataset, and its generalization capacity was further verified on corn and rice datasets. Experimental results showed that, for seedlings at different growth stages and in diverse field environments, the mean error angle (MEA) ranged from 0.844° to 2.96°, the root mean square error (RMSE) from 1.249° to 4.65°, and the mean relative error (MRE) from 1.008% to 3.47%. The proposed algorithm exhibits high accuracy, robustness, and strong generalization. This study overcomes key limitations of traditional visual navigation line extraction and offers a theoretical foundation for classical image-processing-based visual navigation.
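A minimal sketch of such a classical pipeline is shown below, assuming OpenCV-style primitives. The excess-green segmentation, Otsu thresholding, and `cv2.fitLine` call are illustrative stand-ins for the paper's preprocessing, cluster-centerline selection, and linear-fitting steps, not the authors' exact implementation; only the opening operation and eight-connected component filtering are named in the abstract.

```python
import cv2
import numpy as np

def extract_navigation_line(bgr_image, min_area=200):
    """Hypothetical sketch: segment crop pixels, denoise, keep large
    8-connected components, and fit a straight navigation line."""
    # 1. Assumed preprocessing: excess-green index + Otsu to isolate crop regions.
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    exg = 2.0 * g - r - b
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2. Morphological opening to remove small redundant pixels (per the abstract).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # 3. Eight-connected component filtering: keep only sufficiently large blobs.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    keep = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    clean = (np.isin(labels, keep).astype(np.uint8)) * 255

    # 4. Linear fit through the retained crop pixels as the navigation line
    #    (stand-in for the paper's optimal-pixel selection before fitting).
    ys, xs = np.nonzero(clean)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return (vx, vy, x0, y0), clean
```

The returned direction vector (vx, vy) and point (x0, y0) parameterize the fitted line; an angle error against a ground-truth navigation line can then be computed for MEA/RMSE-style evaluation.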