Abstract

Deep learning has demonstrated significant capabilities for learning image features and presents many opportunities for agricultural automation. Deep neural networks typically require large and diverse training datasets to learn generalizable models. This requirement is challenging for agricultural automation systems, since collecting and annotating large numbers of training samples from field crops and greenhouses is an expensive and complicated process owing to the wide diversity of crops, growth seasons and climate conditions. This research proposes a new method for augmenting the training dataset with synthesized images that preserve the background context and texture of the target object. A synthetic dataset of 1800 images was generated from a reference dataset by applying image processing techniques. For the reference dataset, 100 real images of strawberry flowers were collected in greenhouses, and a further 230 real images were collected for evaluating detection performance. Experimental results demonstrated that the proposed method improves the performance of state-of-the-art convolutional object detectors, including Faster R-CNN, SSD, YOLOv3 and CenterNet, for the task of strawberry flower detection in an unstructured environment. The YOLOv3 w/darknet53 model achieved a 46.84 percentage-point boost, with average precision (AP) improving from 39.20% to 86.04% when augmentation with the synthetic dataset was applied. The AP of the Faster R-CNN w/resnet50, SSD w/resnet50 and FPN, and CenterNet w/hourglass52 models improved by 15.71, 18.42 and 22.24 percentage points, respectively. The Faster R-CNN w/resnet50 model provided the best strawberry flower detection performance with an AP of 90.84%, higher than the SSD w/resnet50 and FPN, YOLOv3 w/darknet53 and CenterNet w/hourglass52 models (88.56%, 86.04% and 83.82%, respectively).

Keywords: Flower detection, deep convolutional neural network, data augmentation, synthetic dataset.
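The paper's guided collage composition algorithm is not reproduced in this summary; the following is only a minimal sketch of the general cut-and-paste idea it builds on, i.e. blending an annotated object patch into a real background so that surrounding context and texture are preserved. The file names, patch positions and blending choices below are assumptions for illustration, not the authors' implementation.

```python
# Minimal cut-and-paste synthesis sketch (NOT the paper's guided collage
# composition algorithm). It pastes an object patch into a real background
# image and records the pasted box as a synthetic annotation.
import random
import numpy as np
import cv2


def paste_object(background, obj_patch, obj_mask, top_left):
    """Alpha-blend an object patch into a background image.

    background : HxWx3 uint8 image (real greenhouse scene)
    obj_patch  : hxwx3 uint8 crop of the object (e.g. a strawberry flower)
    obj_mask   : hxw float mask in [0, 1] marking object pixels
    top_left   : (y, x) position of the patch in the background
    Returns the composite image and the pasted box (y, x, h, w).
    """
    out = background.copy()
    y, x = top_left
    h, w = obj_patch.shape[:2]
    roi = out[y:y + h, x:x + w].astype(np.float32)
    alpha = obj_mask[..., None].astype(np.float32)
    blended = alpha * obj_patch.astype(np.float32) + (1.0 - alpha) * roi
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out, (y, x, h, w)


# Hypothetical usage: scatter a few flower patches over a background image.
bg = cv2.imread("greenhouse_background.jpg")          # assumed file name
flower = cv2.imread("flower_patch.png")               # assumed file name
mask = cv2.imread("flower_mask.png", 0) / 255.0       # assumed binary mask
boxes = []
for _ in range(3):
    y = random.randint(0, bg.shape[0] - flower.shape[0])
    x = random.randint(0, bg.shape[1] - flower.shape[1])
    bg, box = paste_object(bg, flower, mask, (y, x))
    boxes.append(box)  # pasted boxes serve as the synthetic annotations
```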

Highlights

  • In recent years, automation in agriculture has been motivated by concerns over the increasing demand for productivity and quality in food production while decreasing the pressure on the resources required (Bac et al., 2014)

  • Experimental results demonstrated that the proposed method improves the performance of state-of-the-art convolutional object detectors, including Faster R-CNN, Single Shot Detector (SSD), YOLOv3 and CenterNet, for the task of strawberry flower detection in an unstructured environment

  • The Faster R-CNN w/resnet50 model provided the best strawberry flower detection performance with an average precision (AP) of 90.84%, higher than the SSD w/resnet50 and feature pyramid network (FPN), YOLOv3 w/darknet53 and CenterNet w/hourglass52 models (88.56%, 86.04% and 83.82%, respectively)

Summary

INTRODUCTION

Automation in agriculture has been motivated by concerns over the increasing demand for productivity and quality in food production while decreasing the pressure on the resources required (Bac et al., 2014). In most computer vision-based applications where CNNs show significant progress over hand-engineered methods, such as image segmentation, classification, and object detection in a scene, the size of the training dataset is typically on the order of tens of thousands to tens of millions of images (Deng et al., 2009). This allows for much diversity in the training samples and, as a consequence, very robust learned models. Several researchers have employed basic augmentation methods such as rotations (Namin et al., 2018), cropping and flipping/mirroring (Dcunha et al., 2017), scaling (De Brabandere et al., 2017) and color transformation (Dias et al., 2018), and achieved improved performance in classification, target detection and instance segmentation tasks in agriculture; a minimal example of such a pipeline is sketched below. These transformations provide only a limited number of augmentations and cannot help much when few training samples are available. Using the synthesized images alongside real training data, this work demonstrated the applicability of the proposed method to boost the performance of modern convolutional object detection networks, including Faster R-CNN, SSD, YOLOv3 and CenterNet.
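The cited works used their own augmentation pipelines; the sketch below only illustrates the basic transformations listed above (rotation, cropping, flipping/mirroring, scaling and color transformation) using torchvision, and all parameter values are assumptions. Note that for detection tasks the geometric transforms must also be applied to the bounding-box annotations, which is one reason such pipelines add limited diversity on their own.

```python
# Basic image-level augmentation pipeline (illustrative parameters only).
from torchvision import transforms

basic_augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                # rotations
    transforms.RandomResizedCrop(512, scale=(0.8, 1.0)),  # cropping + scaling
    transforms.RandomHorizontalFlip(p=0.5),               # flipping/mirroring
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2, hue=0.05),     # color transformation
])

# Each call yields a different randomly transformed view of a PIL image:
# augmented = basic_augment(pil_image)
```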

MATERIALS AND METHODS
Evaluation of guided collage composition algorithm
Evaluation metrics
RESULTS AND DISCUSSION
CONCLUSION