Highlights

- Corn damage detection was possible using advanced deep learning and computer vision techniques trained with images of simulated corn lodging.
- RetinaNet and YOLOv2 both worked well at identifying regions of lodged corn.
- Automating crop damage identification from visual-band UAS imagery could provide useful information to producers and other stakeholders.

Abstract

Severe weather events can cause large financial losses to farmers. Detailed information on the location and severity of damage will assist farmers, insurance companies, and disaster response agencies in making wise post-damage decisions. The goal of this study was to provide a proof of concept for detecting areas of damaged corn in aerial imagery using computer vision and deep learning techniques. A specific objective was to compare existing object detection algorithms to determine which is best suited for corn damage detection. Simulated corn lodging was used to create a training and analysis data set, and an unmanned aerial system equipped with an RGB camera was used for image acquisition. Three popular object detectors (Faster R-CNN, YOLOv2, and RetinaNet) were assessed for their ability to detect damaged areas, and average precision (AP) was used to compare them. RetinaNet and YOLOv2 demonstrated robust capability for corn damage identification, with AP ranging from 98.43% to 73.24% and from 97.0% to 55.99%, respectively, across all conditions. Faster R-CNN did not perform as well as the other two models, with AP between 77.29% and 14.47% across all conditions. Detecting corn damage at later growth stages was more difficult for all three object detectors.

Keywords: Computer vision, Faster R-CNN, RetinaNet, Severe weather, Smart farming, YOLO.
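Because AP is the single metric used to rank the three detectors, the sketch below shows how per-class AP is commonly computed in detection benchmarks (all-point interpolation of the precision-recall curve). This is a generic illustration, not code from the paper: the function names are hypothetical, and it assumes each detection has already been matched to ground truth (e.g., at an IoU threshold of 0.5).

```python
# Minimal sketch of per-class average precision (AP), assuming
# detections have already been matched to ground-truth boxes.
import numpy as np

def average_precision(scores, is_tp, n_ground_truth):
    """All-point interpolated AP for one class.

    scores         -- confidence score of each detection
    is_tp          -- 1 if the detection matched a ground-truth box, else 0
    n_ground_truth -- total number of ground-truth boxes for the class
    """
    order = np.argsort(scores)[::-1]            # rank detections by confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_ground_truth
    precision = cum_tp / np.arange(1, len(tp) + 1)

    # Make precision monotonically non-increasing from right to left.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])

    # Integrate precision over the points where recall increases.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Toy usage: five detections against three ground-truth boxes.
print(average_precision([0.9, 0.8, 0.7, 0.6, 0.5],
                        [1, 0, 1, 1, 0], 3))   # -> ~0.833
```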
