Abstract

Autonomous navigation in agricultural environments is challenged by the varying field conditions that arise in arable fields. State-of-the-art solutions for autonomous navigation in such environments require expensive hardware, such as a Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS). This paper presents a robust crop row detection algorithm that withstands such field variations using inexpensive cameras. Existing data sets for crop row detection do not represent all possible field variations, so a data set of sugar beet images was created representing 11 field variations, comprising multiple growth stages, light levels, varying weed densities, curved crop rows, and discontinuous crop rows. The proposed pipeline segments the crop rows with a deep learning-based method and uses the predicted segmentation mask to extract the central crop row via a novel central crop row selection algorithm. The crop row detection algorithm was evaluated both for detection performance and for its capability to support visual servoing along a crop row. The visual servoing-based navigation was tested in a realistic simulation scenario with real ground and plant textures. Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.
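To make the pipeline concrete, the central-row extraction step can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' published algorithm: it assumes a binary crop-segmentation mask, keeps only the mask pixels in a band around the image's vertical centerline as a crude stand-in for central crop row selection, and fits a line to them, which a visual servoing controller could then track.

```python
import numpy as np


def fit_central_crop_row(mask: np.ndarray) -> tuple[float, float]:
    """Fit a line x = m*y + c to the mask pixels nearest the image centerline.

    `mask` is a binary (H, W) crop-segmentation mask. This is a simplified
    sketch; the paper's central crop row selection algorithm is more
    elaborate (it must cope with curved and discontinuous rows).
    """
    h, w = mask.shape
    center_x = w / 2.0
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("empty mask: no crop pixels to fit")
    # Keep only pixels within a band around the vertical centerline,
    # a crude proxy for selecting the central crop row.
    band = np.abs(xs - center_x) < w * 0.15
    ys_c, xs_c = ys[band], xs[band]
    if ys_c.size < 2:
        ys_c, xs_c = ys, xs  # fall back to all crop pixels
    # Least-squares fit x = m*y + c; crop rows are roughly vertical
    # in a forward-facing camera image, so x is modeled as a function of y.
    m, c = np.polyfit(ys_c, xs_c, 1)
    return float(m), float(c)
```

For servoing, the lateral error can be taken as the fitted line's x-position at the bottom image row minus `center_x`, and the heading error from the slope `m`.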
