Abstract

Automated robotic platforms are an important part of precision agriculture solutions for sustainable food production. Agri-robots require robust and accurate guidance systems to navigate between crops and to and from their base station. Onboard sensors such as machine vision cameras offer a flexible guidance alternative to more expensive solutions designed for structured environments, such as scanning lidar or RTK-GNSS. The main challenges for visual crop row guidance are the dramatic variation in crop appearance between farms and across the growing season, together with variations in crop spacing and in the contours of the crop rows. Here we present a visual guidance pipeline for an agri-robot operating in strawberry fields in Norway, based on semantic segmentation with a convolutional neural network (CNN) that partitions input RGB images into crop and not-crop (i.e., drivable terrain) regions. To handle the uneven contours of crop rows in Norway's hilly agricultural regions, we develop a new adaptive multi-ROI method for fitting trajectories to the drivable regions. We test our approach in open-loop trials with a real agri-robot operating in the field and show that it compares favourably to traditional guidance approaches.
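
The abstract does not spell out the adaptive multi-ROI method, but the general idea of splitting a segmentation mask into horizontal bands and fitting a curve through the drivable region can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' exact method; the function name fit_row_trajectory and the parameters n_rois and min_pixels are hypothetical.

```python
import numpy as np

def fit_row_trajectory(drivable_mask, n_rois=8, min_pixels=50):
    """Fit a trajectory through the drivable region of a binary mask.

    drivable_mask: HxW array, 1 where terrain is drivable (not-crop).
    The mask is split into n_rois horizontal bands; the centroid of the
    drivable pixels in each band gives one trajectory point, and a
    second-order polynomial x = f(y) is fitted through the points.
    """
    h, _ = drivable_mask.shape
    band_edges = np.linspace(0, h, n_rois + 1, dtype=int)
    xs, ys = [], []
    for top, bottom in zip(band_edges[:-1], band_edges[1:]):
        rows, cols = np.nonzero(drivable_mask[top:bottom])
        if cols.size < min_pixels:      # skip bands with too little drivable area
            continue
        xs.append(cols.mean())          # lateral centroid of the drivable pixels
        ys.append(top + rows.mean())    # vertical centre of those pixels
    if len(xs) < 3:
        return None                     # not enough support for a stable fit
    return np.polyfit(ys, xs, deg=2)    # coefficients of x = a*y**2 + b*y + c
```

Fitting through per-band centroids rather than a single whole-image line is what lets the resulting trajectory follow the uneven row contours described above.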

Highlights

  • Automating agricultural practices through the use of robots is a key strategy for improving farm productivity and achieving sustainable food production to meet the needs of future generations

  • Real-time kinematic (RTK) GNSS provides an accurate position for the robot but does not inherently describe the location or extent of the crops

  • Semantic segmentation offers the flexibility to target whichever class labels are of interest (see the sketch below)
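
A minimal sketch of such a segmentation step, assuming a two-class (crop vs. drivable) model; the page does not specify the network architecture, so the DeepLabV3 backbone here is only a stand-in:

```python
import torch
import torchvision

# Stand-in two-class segmentation network; the CNN actually used in the
# paper may differ.  With num_classes=2 we assume channel 0 = crop and
# channel 1 = drivable terrain (an arbitrary labelling convention).
model = torchvision.models.segmentation.deeplabv3_resnet50(num_classes=2)
model.eval()

def segment(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: 1x3xHxW float tensor in [0, 1]; returns an HxW label mask."""
    with torch.no_grad():
        logits = model(rgb)["out"]   # 1x2xHxW per-class scores
    return logits.argmax(dim=1)[0]   # per-pixel class label
```

Changing the labels of interest (e.g., adding an obstacle class) only changes the number of output channels and the training annotations, which is the flexibility the highlight refers to.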


Introduction

Automating agricultural practices through the use of robots (i.e., agri-robots) is a key strategy for improving farm productivity and achieving sustainable food production to meet the needs of future generations. A basic requirement for such robots is the ability to navigate autonomously to and from their base station and along the crop rows. Real-time kinematic (RTK) GNSS provides an accurate position for the robot but does not inherently describe the location or extent of the crops. Onboard sensors such as scanning lasers [4] or machine vision cameras [5] enable the robot to directly sense the crops and structures in its surroundings. Lidar-based methods work best in structured environments such as greenhouses, while traditional visual approaches rely on distinct and regular crop rows [6].
