Abstract

Producing food sustainably is becoming increasingly challenging due to the shortage of skilled labor, the unaffordable cost of labor when it is available, and the limited returns for growers resulting from the low produce prices demanded by large supermarket chains, set against the ever-increasing costs of inputs such as fuel, chemicals, seeds, and water. Robotics emerges as a technological advance that can counterbalance some of these challenges, mainly in industrialized countries. However, the deployment of autonomous machines in open environments exposed to uncertainty and harsh ambient conditions poses an important challenge to reliability and safety. Consequently, a detailed, real-time parametrization of the working environment is necessary to achieve autonomous navigation. This article proposes a navigation strategy for guiding a robot along vineyard rows for field monitoring. Given that global positioning cannot be guaranteed at all times in every vineyard, the strategy is based on local perception and results from fusing three complementary technologies: 3D vision, lidar, and ultrasonics. Several perception-based navigation algorithms were developed between 2015 and 2019. Their comparison in real environments and conditions showed that the augmented perception derived from combining these three technologies provides a consistent basis for outlining the intelligent behavior of agricultural robots operating within orchards.
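To make the local-perception idea concrete, the following minimal sketch (ours, not taken from the paper) shows one plausible way to fuse redundant lateral range readings from stereo vision, lidar, and ultrasonics into a single steering command for row following. All function names, sensor variances, and the proportional gain are illustrative assumptions.

```python
import math

def fuse_lateral_distance(estimates):
    """Variance-weighted average of (distance_m, variance_m2) pairs,
    skipping sensors that returned no reading (None). In the paper's
    setup the three readings would come from stereo 3D vision, lidar,
    and an ultrasonic rangefinder."""
    valid = [(d, v) for d, v in estimates if d is not None]
    if not valid:
        return None
    weights = [1.0 / v for _, v in valid]
    return sum(w * d for (d, _), w in zip(valid, weights)) / sum(weights)

def steering_command(left, right, k_p=1.5):
    """Proportional steering angle (rad) from the lateral offset between
    the fused left and right row distances; positive steers left."""
    if left is None or right is None:
        return 0.0  # degrade gracefully: hold heading if one side is lost
    offset = (right - left) / 2.0  # > 0 means robot sits closer to left row
    return max(-math.radians(30), min(math.radians(30), k_p * offset))

if __name__ == "__main__":
    # (distance m, variance m^2) per sensor: stereo, lidar, ultrasonic
    left = fuse_lateral_distance([(1.10, 0.04), (1.05, 0.01), (1.20, 0.09)])
    right = fuse_lateral_distance([(0.90, 0.04), (0.95, 0.01), (None, 0.09)])
    print(f"steer: {steering_command(left, right):.3f} rad")
```

A variance-weighted average naturally favors the most reliable modality while still degrading gracefully when one sensor drops out, which is the practical benefit of redundant local perception that the abstract highlights.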

Highlights

  • Around the turn of the 21st century, off-the-shelf commercial stereo cameras made 3D perception accessible to many on-vehicle, outdoor applications thanks to their compactness, easy connectivity, and reasonably fast correlation algorithms that solved stereo matching in real time

  • Earlier attempts [1] demonstrated the great potential of 3D perception in general, and of stereo vision in particular, but their bulky rigs, custom matching algorithms, and heavy computational demands practically ruled out outdoor use from moving vehicles

  • A perception system for autonomous navigation must be evaluated by the real capacity of the auto-steered vehicle to navigate safely in relevant environments while executing a task efficiently



Introduction

The turn of the 21st century coincided with the appearance of off-the-shelf commercial stereo cameras, which made 3D perception accessible to many on-vehicle and outdoor applications thanks to their compactness, easy connectivity, and reasonably fast correlation algorithms that solved stereo matching in real time. Previous attempts [1] had shown the great potential of 3D perception in general, and of stereo vision in particular, but they required bulky rigs in which physically maintaining the stereo geometry of the binocular assembly, developing custom matching algorithms, and finding computers capable of fast calculation practically ruled out any chance of working outdoors from a moving vehicle.
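The real-time correlation-based matching mentioned above is now available off the shelf. As a hedged illustration (not the authors' pipeline), the sketch below computes a dense depth map with OpenCV's block matcher; the focal length and baseline are placeholder values, not parameters from the paper.

```python
import numpy as np
import cv2

FOCAL_PX = 700.0    # focal length in pixels (assumed)
BASELINE_M = 0.12   # stereo baseline in meters (assumed)

def stereo_depth(left_gray, right_gray):
    """Dense correlation-based block matching on a rectified 8-bit
    grayscale pair, then depth by triangulation: Z = f * B / d."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # compute() returns fixed-point disparities scaled by 16
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan  # mark invalid or zero-disparity matches
    return FOCAL_PX * BASELINE_M / disp  # per-pixel depth in meters
```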

