Abstract

Accurate information on the spatial distribution of plant species and communities is in high demand in various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that convolutional neural networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular in data at the centimeter scale acquired with unoccupied aerial vehicles (UAVs). However, such tasks require ample training data to yield transferable CNN models. Reference data are commonly generated via geocoded in situ observations or by labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is plant photographs from citizen science projects such as the iNaturalist database, which capture knowledge on the appearance of plants. Such crowd-sourced plant photos are expected to be highly heterogeneous and often show a perspective that differs from the typical bird's-eye view of remote sensing data. Still, crowd-sourced plant photos could be a valuable source for overcoming the challenge of limited training data and for reducing the effort of field data collection and data labeling. Here, we explore the potential of transfer learning from this crowd-sourced data resource to the remote sensing context. We investigate, first, whether crowd-sourced plant photos can be used to train CNNs that subsequently map plant species in high-resolution remote sensing imagery, and second, whether the predictive performance can be increased by a priori selecting photos whose perspective is more similar to that of the remote sensing data. We tested the proposed approach in three case studies using multiple RGB orthoimages acquired with UAVs, with the target plant species Fallopia japonica (F. japonica), Portulacaria afra (P. afra), and 10 different tree species, respectively. To train the CNN models, we queried the iNaturalist database for photos of the target species and of the surrounding species expected in the area of each case study. We trained CNN models with an EfficientNet-B7 backbone. To apply these models, trained on the crowd-sourced data, to the remote sensing imagery, we used a sliding-window approach with a 10% overlap; the individual window-based predictions were spatially aggregated to create a high-resolution classification map. Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photos can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photos by acquisition properties before training further increased the predictive performance. This study demonstrates that citizen science data can effectively alleviate a common bottleneck for vegetation assessments and provides an example of how the ever-increasing availability of crowd-sourced and big data can be harnessed for remote sensing applications.
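To make the workflow concrete, a minimal sketch of the first step, gathering crowd-sourced training photos, is shown below. It uses the public iNaturalist API v1; the endpoint and query parameters are real, but the function name, pagination, licensing filters, and error handling are simplified illustrations, not the authors' code.

```python
# Sketch: fetch training-photo URLs for a target species from the public
# iNaturalist API (v1). Pagination and license filtering are omitted.
import requests

API = "https://api.inaturalist.org/v1/observations"

def fetch_photo_urls(species, max_photos=200):
    """Return photo URLs for research-grade observations of `species`."""
    params = {
        "taxon_name": species,
        "quality_grade": "research",  # community-verified identifications
        "photos": "true",
        "per_page": max_photos,
    }
    results = requests.get(API, params=params, timeout=30).json()["results"]
    urls = []
    for obs in results:
        for photo in obs.get("photos", []):
            # iNaturalist returns square thumbnail URLs by default;
            # request a larger rendition for CNN training crops.
            urls.append(photo["url"].replace("square", "medium"))
    return urls[:max_photos]

urls = fetch_photo_urls("Fallopia japonica")
```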
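The abstract names an EfficientNet-B7 backbone. The following is a minimal transfer-learning sketch built with tf.keras; the input size, classification head, and optimizer are assumptions for illustration, not the authors' configuration.

```python
# Sketch: EfficientNet-B7 backbone with ImageNet weights and a simple
# softmax head for multi-class plant species classification.
import tensorflow as tf

def build_model(n_classes, input_size=600):
    backbone = tf.keras.applications.EfficientNetB7(
        include_top=False, weights="imagenet",
        input_shape=(input_size, input_size, 3), pooling="avg")
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(
        backbone.output)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```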
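Finally, a sketch of the sliding-window inference with 10% overlap and spatial aggregation described above: tiles are classified independently and their class scores are averaged back onto the pixel grid. `model` stands in for any Keras-style classifier; tile size, padding, and preprocessing are assumptions, and the orthoimage is assumed to be padded so that tiles fit the grid.

```python
# Sketch: sliding-window classification of a UAV orthoimage with 10 %
# overlap, aggregated into a per-pixel class map by score averaging.
import numpy as np

def sliding_window_map(image, model, tile=512, overlap=0.10, n_classes=11):
    """Aggregate per-tile class probabilities into a dense class map."""
    step = int(tile * (1 - overlap))           # 10 % overlap between tiles
    h, w, _ = image.shape                      # assume h, w fit the grid
    scores = np.zeros((h, w, n_classes))
    counts = np.zeros((h, w, 1))
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            patch = image[y:y + tile, x:x + tile]
            probs = model.predict(patch[None], verbose=0)[0]  # (n_classes,)
            scores[y:y + tile, x:x + tile] += probs   # accumulate scores
            counts[y:y + tile, x:x + tile] += 1       # track overlaps
    # Average overlapping predictions, then take the most likely class.
    class_map = np.argmax(scores / np.maximum(counts, 1), axis=-1)
    return class_map                           # per-pixel predicted species
```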
