Unmanned pineapple harvesters must operate efficiently along crop rows. This study presents an approach for extracting visual navigation lines in pineapple fields using an enhanced YOLOv5 algorithm, with the aim of overcoming the positioning accuracy and stability challenges faced by unmanned pineapple harvesters. The improvements to YOLOv5 include the addition of a small-object detection layer in the neck, and the original loss function was modified to stabilize training and increase training speed. Experimental results show that the enhanced YOLOv5 model achieved a 3.62% increase in precision and a 2.15% increase in recall compared with the original model, with an average detection speed of 17.35 frames per second (fps) at an image resolution of 960 × 1080 pixels. The enhanced YOLOv5 algorithm was used to recognize pineapple plants and extract feature points representing pineapple rows; a clustering algorithm then classified these feature points by row, and a modified shortest-distance algorithm fitted the pineapple crop row centerlines and calculated the optimal navigation line. Field experiments showed average extraction accuracies of 89.13% and 85.32% in high-density crops under sunny and cloudy conditions, respectively; across crops of different densities, the average extraction accuracy was 85.74%, the row recognition accuracy was 89.29%, and the angle error was 3.54°. The proposed algorithm provides a practical method for obtaining navigation paths for unmanned pineapple harvesters.
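
The abstract describes the post-detection steps only at a high level. The sketch below is a minimal, hypothetical illustration of that pipeline stage (grouping detected feature points into rows, fitting a line per row, and taking a navigation line between rows); it substitutes DBSCAN and ordinary least squares for the paper's unspecified clustering method and its modified shortest-distance fit, and all function names, parameter values, and the "midline of the two innermost rows" choice are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of row clustering and navigation-line extraction from
# YOLOv5 bounding-box centres. DBSCAN and least-squares stand in for the
# paper's clustering step and modified shortest-distance fitting.
import numpy as np
from sklearn.cluster import DBSCAN

def fit_row_lines(centres, eps=60.0, min_samples=3):
    """Group feature points (x, y) into crop rows and fit a line x = m*y + b per row."""
    # Cluster on the horizontal image coordinate only, so points stacked
    # vertically along one crop row fall into the same cluster.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centres[:, [0]])
    lines = []
    for label in sorted(set(labels) - {-1}):           # label -1 marks noise points
        row = centres[labels == label]
        m, b = np.polyfit(row[:, 1], row[:, 0], 1)     # x as a function of y (image depth)
        lines.append((m, b))
    return lines

def navigation_line(lines):
    """Take the midline between two adjacent row lines as the navigation line."""
    lines = sorted(lines, key=lambda mb: mb[1])        # order rows left to right by intercept
    (m1, b1), (m2, b2) = lines[0], lines[1]            # assumes at least two rows detected
    return (m1 + m2) / 2.0, (b1 + b2) / 2.0

# Example with synthetic bounding-box centres from two rows of detections.
pts = np.array([[100, 50], [105, 200], [110, 350],    # left row
                [400, 60], [395, 210], [405, 360]])   # right row
print(navigation_line(fit_row_lines(pts)))            # (slope, intercept) of the midline
```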