Abstract

Automatic detection and counting of crop circles in the desert can be of great use for large-scale farming, as it enables easy and timely management of the farming land. However, the literature so far remains short of relevant contributions in this regard. This letter frames the crop circle detection problem within a deep learning framework. In particular, accounting for their outstanding performance in object detection, we investigate the use of Mask R-CNN (Region Based Convolutional Neural Network) and YOLOv3 (You Only Look Once) models for crop circle detection in the desert. In order to quantify the performance, we build a crop circle dataset from images extracted via Google Earth over a desert area in East Oweinat in the South-Western Desert of Egypt. The dataset totals 2511 crop circle samples. With a small training set and a relatively large test set, plausible detection rates were obtained, scoring a precision of 1 and a recall of about 0.82 for Mask R-CNN, and a precision of 0.88 and a recall of 0.94 for YOLOv3.
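The precision and recall figures above follow the standard object-detection convention: a detection is a true positive when it matches a ground-truth circle, a false positive otherwise, and unmatched ground-truth circles are false negatives. A minimal sketch of that computation (the counts below are illustrative only, not taken from the paper's dataset):

```python
# Hedged sketch: standard precision/recall computation from
# true-positive (tp), false-positive (fp), and false-negative (fn)
# counts, as used in object-detection evaluation.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative example: 82 of 100 circles detected with no false
# alarms yields a precision of 1.0 and a recall of 0.82.
p, r = precision_recall(tp=82, fp=0, fn=18)
print(p, r)  # 1.0 0.82
```

In practice a detection counts as a match only if its Intersection-over-Union (IoU) with a ground-truth box or mask exceeds a chosen threshold; the counts here assume that matching has already been done.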

Highlights

  • Land use and land cover are two areas that have attracted ongoing interest in the remote sensing community

  • This paper investigated the use of deep learning for crop circle detection in the desert

  • A crop circle dataset was built from Google Earth images at 20 km altitude over the East Oweinat area in the South of Egypt


Introduction

Land use and land cover are two areas that have attracted ongoing interest in the remote sensing community, with image classification and object detection remaining the most active topics so far. Deep learning has been tailored to many scopes in remote sensing [1,2,3,4,5]. The Deep Belief Network (DBN) has been utilized as a feature reconstructor, where the most reconstructible features are selected for remote sensing scene classification. Cloud detection in remote sensing images was addressed in [9], where Simple Linear Iterative Clustering (SLIC) is used to infer superpixels from the input image. A patch-to-patch mapping was implemented within a deep learning architecture for remote sensing image registration in [10]. In [11], ternary change detection in Synthetic Aperture Radar data is addressed: an autoencoder is used to learn meaningful features, followed by a three-group clustering, and the resulting features are fed into a CNN for change classification.
