Abstract

In recent years, precision agriculture has been studied as a promising means to meet the growing demand for agricultural products by increasing crop production with fewer inputs. Computer vision-based crop detection with unmanned aerial vehicle (UAV)-acquired images is a critical tool for precision agriculture. However, object detection using deep learning algorithms relies on a significant amount of manually prelabeled training data as ground truth. Detecting field objects such as bales is especially difficult because of (1) long-period image acquisition under different illumination conditions and seasons; (2) limited existing prelabeled data; and (3) few pretrained models and little prior research to use as references. This work improves bale detection accuracy from limited data collection and labeling by building an innovative algorithm pipeline. First, an object detection model is trained using 243 images captured under good illumination conditions in fall from croplands. In addition, domain adaptation (DA), a kind of transfer learning, is applied to synthesize training data under diverse environmental conditions with automatic labels. Finally, the object detection model is optimized with the synthesized datasets. The case study shows that the proposed method improves bale detection performance, raising the recall, mean average precision (mAP), and F measure (F1 score) from averages of 0.59, 0.7, and 0.7 (object detection alone) to averages of 0.93, 0.94, and 0.89 (object detection + DA), respectively. This approach can easily be scaled to many other crop field objects and will significantly contribute to precision agriculture.
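The three-step pipeline can be summarized in code. Below is a minimal structural sketch in Python; the function bodies are placeholders, and the paths, condition names, and helper names (train_detector, translate_images) are our assumptions for illustration, not the authors' implementation:

```python
"""Structural sketch of the three-step pipeline (illustrative only).

All function bodies are placeholders; paths, condition names, and
helper names are hypothetical, not the authors' implementation.
"""
from pathlib import Path


def train_detector(images: Path, labels: Path) -> Path:
    # Placeholder: train (or fine-tune) a YOLOv3 detector, e.g., with
    # the darknet framework, and return the resulting weights file.
    return Path("yolov3_bales.weights")


def translate_images(images: Path, condition: str, out_dir: Path) -> None:
    # Placeholder: CycleGAN-style translation of the fall images into a
    # target condition (winter, overcast, ...). Because the translation
    # preserves image geometry, the existing bounding-box labels can be
    # reused for the synthesized images, i.e., "automatic labels".
    out_dir.mkdir(parents=True, exist_ok=True)


def main() -> None:
    fall_images = Path("data/fall/images")   # the 243 hand-labeled images
    fall_labels = Path("data/fall/labels")

    # Step 1: primary detector trained on the small fall dataset.
    weights = train_detector(fall_images, fall_labels)

    # Step 2: synthesize training data for other environmental
    # conditions; the labels carry over automatically.
    for condition in ("winter", "overcast", "dusk"):
        translate_images(fall_images, condition,
                         Path(f"data/synthetic/{condition}/images"))

    # Step 3: optimize the detector on the real + synthesized data.
    weights = train_detector(Path("data/extended/images"),
                             Path("data/extended/labels"))
    print(f"final weights: {weights}")


if __name__ == "__main__":
    main()
```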

Highlights

  • According to the United Nations population estimates and projections, the growing world population will be nearly 10 billion in 2050 [1]

  • Computer vision helps with object detection, and machine learning allows useful information to be extracted from the collected data, showing tremendous advantages over the traditional methods applied in agriculture [4]

  • To minimize the discrepancy between the source domain and target domain regarding the domain distribution, we propose a model combining the convolutional neural network (CNN)-based YOLOv3 model and domain adaptation (DA), a representative method in transfer learning (the CycleGAN objective used for such DA is sketched below)
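For context, the DA step uses CycleGAN-style image translation (see the Step 2 heading in the outline below). Its full objective, in the standard formulation by Zhu et al. (notation ours, not taken from this paper), combines two adversarial losses with a cycle-consistency term:

```latex
% Full CycleGAN objective: adversarial losses for both mapping
% directions plus a cycle-consistency term weighted by \lambda.
\mathcal{L}(G, F, D_X, D_Y) =
    \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y)
  + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X)
  + \lambda \, \mathcal{L}_{\mathrm{cyc}}(G, F),
\qquad
\mathcal{L}_{\mathrm{cyc}}(G, F) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\lVert F(G(x)) - x \rVert_1\right]
  + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\left[\lVert G(F(y)) - y \rVert_1\right]
```

Here G : X → Y translates source-condition (e.g., fall) images into the target condition and F is the inverse mapping; the cycle-consistency term keeps translated images geometrically faithful to the originals, which is what allows the existing bounding-box labels to be transferred to the synthesized images automatically.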

Summary

Introduction

Illumination variation, including the change in light conditions and the presence or absence of shadow cover, plays a significant role in object detection in the context of outdoor practices. The illumination and hue changes are the most significant factors impacting the bale detection model performance, and reducing the gap between the source domain and the target domain decreases the difficulty of shaping accurate classification models built on the deep learning architecture. We also label the images collected under other conditions to test the accuracy of the predictions given by the model. Collecting images using strobe lighting mounted on a ground-robot image-capturing machine compensates for the hue variation and helps to build a reliable deep learning model. Combined with our manually labeled data, we are able to provide a valuable training dataset of over 1000 bale images, which is publicly available after this publication.
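To illustrate why hue variation matters, a simple way to probe a trained detector's sensitivity is to synthetically shift the hue of test images and compare detections across the variants. A minimal sketch with OpenCV (the file name is hypothetical):

```python
import cv2
import numpy as np


def hue_shift(bgr: np.ndarray, delta: int) -> np.ndarray:
    """Shift the hue channel by delta (OpenCV hue range is 0-179)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.int16)
    hsv[..., 0] = (hsv[..., 0] + delta) % 180
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)


# Hypothetical file name; generate hue-shifted variants of one test
# image and run the detector on each to gauge illumination sensitivity.
img = cv2.imread("bale_fall.jpg")
if img is not None:
    for d in (-20, -10, 10, 20):
        cv2.imwrite(f"bale_hue_{d:+d}.jpg", hue_shift(img, d))
```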

Computer Vision in Precision Agriculture
Transfer Learning
Methodology
Framework
Example
Experiment Equipment
Bales Data Collection and Description
Experimental Results
Primary Bale Detection with YOLOv3 Corresponding to Step 1
Augmenting the Training Data with CycleGAN Corresponding to Step 2
Optimized YOLOv3 Model with Extended Datasets Corresponding to Step 3
Findings
Comparison and Advantages
Conclusions