Abstract

Small mammals, and particularly rodents, are common inhabitants of farmlands, where they play key roles in the ecosystem. When overabundant, however, they become major pests that can reduce crop production and farmers’ incomes, with tangible effects on the achievement of Sustainable Development Goal 2 (SDG2, Zero Hunger) of the United Nations. Farmers currently lack a standardized, accurate method for detecting the presence, abundance, and locations of rodents in their fields, and hence lack environmentally efficient rodent-control methods that promote sustainable agriculture and reduce the environmental impacts of cultivation. New developments in unmanned aerial system (UAS) platforms and sensor technology enable cost-effective, simultaneous multimodal data collection at very high spatial resolutions in environmental and agricultural contexts. Object detection from remote-sensing images has been an active research topic over the last decade, and with recent increases in computational resources and data availability, deep learning-based object detection methods are beginning to play an important role in advancing commercial and scientific remote-sensing applications. However, the performance of current detectors on UAS-based datasets, including multimodal spatial and physical datasets, remains limited for small objects. In particular, the ability to quickly detect small objects in a large observed scene (at field scale) is still an open question. In this paper, we compare the efficiency of one- and two-stage detector models applied to a single UAS-based image and to a UAS-based orthophoto product (processed with the Pix4D mapper photogrammetric program) for detecting rodent burrows in agricultural/environmental applications, so as to support farmers’ activities toward the achievement of SDG2.
Our results indicate that multimodal data from low-cost UASs, used within a self-training YOLOv3 model, can provide relatively accurate and robust detection of small objects (mAP of 0.86 and an F1-score of 93.39%) and can deliver valuable insights for field management at high spatial precision, reducing the environmental costs of crop production in line with precision agriculture management.
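The two reported metrics summarize detection quality in different ways: the F1-score is the harmonic mean of precision and recall, while mAP averages the per-class average precision (AP). A minimal illustrative sketch of these definitions is given below; it is not the authors' evaluation code, and the precision/recall values used are hypothetical placeholders.

```python
# Illustrative definitions of the reported detection metrics.
# NOTE: this is a sketch, not the paper's evaluation pipeline;
# the example precision/recall values are hypothetical.

def f1_score(precision: float, recall: float) -> float:
    """F1 = harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def mean_average_precision(per_class_ap: list[float]) -> float:
    """mAP = mean of per-class average precision values."""
    return sum(per_class_ap) / len(per_class_ap)

# Hypothetical single-class case (rodent burrows): mAP equals the class AP.
print(mean_average_precision([0.86]))
# When precision equals recall, F1 equals both of them.
print(round(f1_score(0.9339, 0.9339), 4))
```

In a single-class detection task such as burrow detection, mAP reduces to the AP of that one class, which is why a single mAP value of 0.86 is meaningful here.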

Highlights

  • One- and two-stage detector models for detecting small targets against agricultural backgrounds, using different input datasets, were examined to identify an environmentally efficient method of rodent control that supports the achievement of SDG2 and promotes sustainable agriculture

  • The study focused on evaluating the performance of the suggested models on a single unmanned aerial system (UAS)-based image, a UAS-based orthophoto product processed with the Pix4D mapper photogrammetric program, and a multimodal product for agricultural/environmental applications

  • The contribution of the multimodal dataset to small object detection was assessed and reported


Introduction

… climate change through the improvement of farm resource-use efficiency while at the same time reducing environmental impacts (e.g., plant biotic stress control). In many productive agricultural contexts, small mammals, and rodents in particular, are common inhabitants of farmlands, playing key roles in the ecosystem. When overabundant, they behave as pests, reducing crop production and farmers’ incomes and requiring phytosanitary treatments of the field, which impose environmental and economic costs on farmers [2,3]. An environmentally efficient method of rodent control that promotes sustainable agriculture, reducing the number and extent of treatments and the environmental impacts of cultivation, is not currently available

