Abstract

Agricultural robotics is nowadays a complex, challenging, and exciting research topic. Some agricultural environments present harsh conditions for robotic operability. In the case of steep slope vineyards, there are several challenges: terrain irregularities, illumination conditions, and the inaccuracy or unavailability of signals emitted by the Global Navigation Satellite System (GNSS). Under these conditions, robot navigation becomes a challenging task. To perform these tasks safely and accurately, the extraction of reliable features or landmarks from the surrounding environment is crucial. This work addresses this issue by performing accurate, cheap, and fast landmark extraction in the steep slope vineyard context. To do so, we used a single camera and an Edge Tensor Processing Unit (TPU) provided by Google’s USB Accelerator, a small, high-performance, and low-power unit suitable for image classification, object detection, and semantic segmentation. The proposed approach performs object detection on this device using Deep Learning (DL)-based Neural Network (NN) models to detect vine trunks. To train the models, Transfer Learning (TL) is applied to several pre-trained versions of MobileNet V1 and MobileNet V2, and a benchmark between the two models and their different pre-trained versions is performed. The models are retrained on an in-house dataset, which is publicly available and contains 336 different images with approximately 1,600 annotated vine trunks. Two vineyards are considered, one with camera images captured using the conventional infrared filter and the other with an infrablue filter. Results show that this configuration allows fast vine trunk detection, with MobileNet V2 being the most accurate retrained detector, achieving an overall Average Precision of 52.98%. We briefly compare the proposed approach with the state-of-the-art Tiny YOLO-V3 running on a Jetson TX2, showing that the system adopted in this work outperforms it.
Additionally, it is shown that the proposed detectors are suitable for the Localization and Mapping problems.
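The overall Average Precision reported above (52.98%) follows the standard object-detection metric: detections are sorted by confidence, greedily matched to ground-truth boxes by Intersection-over-Union (IoU), and precision is accumulated over recall. A minimal sketch of this metric, as a rough illustration only (this is not the paper's evaluation code, and it computes the area under the raw precision–recall curve rather than an interpolated variant):

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truth, iou_thr=0.5):
    """detections: list of (score, box); ground_truth: list of boxes."""
    dets = sorted(detections, key=lambda d: d[0], reverse=True)
    matched, tp = set(), []
    for score, box in dets:
        # Greedily match each detection to the best unmatched ground truth.
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            o = iou(box, gt)
            if o > best_iou:
                best_iou, best_j = o, j
        if best_iou >= iou_thr:
            matched.add(best_j)
            tp.append(1)
        else:
            tp.append(0)
    # Accumulate precision over recall (area under the raw PR curve).
    ap, cum_tp, prev_recall = 0.0, 0, 0.0
    for i, t in enumerate(tp):
        cum_tp += t
        recall = cum_tp / len(ground_truth)
        precision = cum_tp / (i + 1)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

For example, one correct detection out of two ground-truth trunks plus one false positive yields an AP of 0.5.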

Highlights

  • The research and development of robotic solutions for the agriculture sector have been growing [1], [2]

  • In order to evaluate the trained Neural Network (NN) on top of Google’s Edge Tensor Processing Unit (TPU), a subset of the dataset previously described was extracted for testing

  • Of the 336 images in the dataset, 45 images, containing approximately 180 vine trunks, were used for the test procedure
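The highlights above describe evaluating the retrained detectors on Google's Edge TPU. A minimal sketch of how such a model can be run through the TensorFlow Lite runtime with the Edge TPU delegate is shown below; the model filename and the output tensor ordering (typical of SSD-style detection exports, but not guaranteed) are assumptions, not the paper's actual code:

```python
def parse_ssd_output(boxes, class_ids, scores, count, threshold=0.5):
    """Filter SSD-style detection tensors down to a list of
    (class_id, score, box) entries above a confidence threshold."""
    results = []
    for i in range(int(count)):
        if scores[i] >= threshold:
            results.append((int(class_ids[i]), float(scores[i]),
                            tuple(boxes[i])))
    return results

if __name__ == "__main__":
    # Hardware-dependent part: requires tflite_runtime and a connected
    # Edge TPU. The model filename is hypothetical.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    interpreter = Interpreter(
        model_path="trunk_detector_edgetpu.tflite",  # hypothetical name
        experimental_delegates=[load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    # Placeholder input; a real image would be resized to inp["shape"].
    image = np.zeros(inp["shape"], dtype=np.uint8)
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    # Typical SSD export order: boxes, classes, scores, count (may vary).
    out = interpreter.get_output_details()
    boxes = interpreter.get_tensor(out[0]["index"])[0]
    class_ids = interpreter.get_tensor(out[1]["index"])[0]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    count = interpreter.get_tensor(out[3]["index"])[0]
    print(parse_ssd_output(boxes, class_ids, scores, count))
```

The delegate offloads the quantized model to the USB Accelerator, which is what allows the low-power, high-speed inference described in the abstract.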


Summary

Introduction

The research and development of robotic solutions for the agriculture sector have been growing [1], [2]. The need for automatic machines in this area is increasing, since farmers increasingly recognize their impact on agriculture [3], in tasks such as harvesting, environmental monitoring, and the supply of water and nutrients [4]. In this context, it is essential to develop solutions that allow robots to navigate safely in these environments. To do so, the robotic platform must be localized in real time. In vineyards built on steep slope hills, GNSS is, in most cases, unavailable due to signal blockage and multi-reflection. Several solutions redundant to GNSS have been developed. Simultaneous Localization and Mapping (SLAM) and Visual Odometry

