Abstract

Knowing before harvest how many plants have emerged and how they are growing is key to optimizing labour and the efficient use of resources. Unmanned aerial vehicles (UAVs) are a useful tool for fast and cost-efficient data acquisition. However, the imagery needs to be converted into operational spatial products that crop producers can use to gain insight into the spatial distribution of the number of plants in the field. In this research, an automated method for counting plants from very high-resolution UAV imagery is presented. The proposed method uses machine vision (the Excess Green Index and Otsu's method) and transfer learning with convolutional neural networks to identify and count plants. The integrated methods were applied to count 10-week-old spinach plants in an experimental field with a surface area of 3.2 ha. Validation data of plant counts were available for 1/8 of the surface area. The results show that the proposed methodology can count plants with an accuracy of 95% at a spatial resolution of 8 mm/pixel in an area of up to 172 m². Moreover, when the spatial resolution is decreased by 50%, the maximum additional counting error is 0.7%. Finally, a total of 170 000 plants in an area of 3.5 ha was computed with an error of 42.5%. The study shows that it is feasible to count individual plants using off-the-shelf UAV-based products and that, via machine vision/learning algorithms, image data can be translated into practical information for non-experts.
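The segmentation step named in the abstract (Excess Green Index followed by Otsu's method) can be illustrated with a minimal sketch. This is not the authors' implementation; the library choices (NumPy, scikit-image) and the tile file name are assumptions made for illustration only.

```python
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

def vegetation_mask(rgb_path: str) -> np.ndarray:
    """Return a boolean mask of likely vegetation pixels in a UAV RGB tile."""
    img = io.imread(rgb_path).astype(np.float64) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Excess Green Index (ExG = 2G - R - B) highlights green canopy against soil.
    exg = 2.0 * g - r - b
    # Otsu's method picks a global threshold separating plant from background pixels.
    return exg > threshold_otsu(exg)

# Hypothetical usage on one orthomosaic tile:
# mask = vegetation_mask("uav_tile_0001.png")
# print("vegetation fraction:", mask.mean())
```

In the abstract's pipeline, the resulting vegetation objects are then identified and counted with the help of a transfer-learned convolutional neural network; that step is not shown here.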

Highlights

  • Crop monitoring is a critical function within the agricultural production chain

  • When the spatial resolution is decreased by 50%, the maximum additional counting error is 0.7%

  • The results provided by the authors concerning plant counting are very limited


Summary

Introduction

Crop monitoring is a critical function within the agricultural production chain. Traditionally, crop monitoring has required manual labour, as the field manager or land owner had to inspect the crops physically; this is time-consuming and prone to human error (Li et al., 2019). These images each consisted of a single row of varying length and were cut into segments of 1 m, 2 m, 4 m, 5 m, and 10 m. This means that plants are counted with a maximum error of 3 plants for the 10 m row images, and of 0.5 to 1 plant for the remaining row images. A counting example follows this paragraph.
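As a sketch of how candidate plants in such a row-segment image could be counted from the ExG/Otsu mask above, connected vegetation blobs can be labelled and filtered by size. This is only an illustration under assumed parameters (the min_area filter is hypothetical); the paper's full method additionally classifies candidates with a transfer-learned CNN before producing the final counts.

```python
from skimage.measure import label, regionprops

def count_plant_candidates(mask, min_area: int = 50) -> int:
    """Count connected vegetation blobs of at least min_area pixels."""
    labeled = label(mask)  # group touching vegetation pixels into objects
    return sum(region.area >= min_area for region in regionprops(labeled))
```

Relating this to the error figures quoted above: a maximum error of 3 plants on a 10 m row image corresponds to roughly 0.3 plants per metre, while the shorter 1-5 m segments stay within about 0.5 to 1 plant per segment.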

