Abstract

Image-based modeling, and more precisely Structure from Motion (SfM) and Multi-View Stereo (MVS), is emerging as a flexible, self-service remote sensing tool for generating fine-grained digital surface models (DSMs) in the Earth sciences and ecology. However, drone-based SfM + MVS applications have developed at a rapid pace over the past decade, and there are now many software options available for data processing. Consequently, the reproducibility issues caused by variations in software choice, and their influence on data quality, remain poorly understood. This understanding is crucial if SfM + MVS is to fulfill its role as a new quantitative remote sensing tool informing management frameworks and species conservation schemes. To address this knowledge gap, a lightweight multirotor drone carrying a Ricoh GR II consumer-grade camera was used to capture replicate, centimeter-resolution image datasets of a temperate, intensively managed grassland ecosystem. These data allowed the exploration of method reproducibility and the impact of SfM + MVS software choice on the accuracy of derived vegetation canopy height measurements. The quality of DSM height measurements derived from four different, yet widely used, SfM + MVS software packages (Photoscan, Pix4D, 3DFlow Zephyr, and MICMAC) was compared with in situ data captured on the same day as image capture. We used both traditional agronomic techniques for measuring sward height and a high-accuracy, high-precision differential GPS survey to generate independent measurements of the underlying ground surface elevation. Using the same replicate image dataset (n = 3) as input, we demonstrate that there are 1.7, 2.0, and 2.5 cm differences in RMSE (excluding one outlier) between the outputs from different SfM + MVS software using “High,” “Medium,” and “Low” quality settings, respectively. Furthermore, we show that there can be a significant difference, although of small overall magnitude, between replicate image datasets (n = 3) processed using the same SfM + MVS software and following the same workflow, with a variance in RMSE of up to 1.3, 1.5, and 2.7 cm (excluding one outlier) for “High,” “Medium,” and “Low” quality settings, respectively. We conclude that SfM + MVS software choice does matter, although the differences between products processed using “High” and “Medium” quality settings are of small overall magnitude.
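
For concreteness, the canopy height and accuracy metric referred to above can be expressed with the standard formulation below; the notation is ours, not necessarily the authors', and is included only as an illustrative sketch of how a DSM-derived sward height would be compared against in situ measurements.

\[
h_i = z_i^{\mathrm{DSM}} - z_i^{\mathrm{ground}},
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(h_i - h_i^{\mathrm{obs}}\right)^2}
\]

Here \(z_i^{\mathrm{DSM}}\) is the SfM + MVS surface elevation at sample point \(i\), \(z_i^{\mathrm{ground}}\) is the differential GPS ground elevation at the same point, \(h_i^{\mathrm{obs}}\) is the corresponding in situ sward height, and \(n\) is the number of sample points.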

Highlights

  • There is a pressing need within ecology for spatial data that can deliver information about ecosystem functional traits and their dynamics through time

  • Given the difference in variance in Root Mean Square Error (RMSE) for the replicate image datasets between the software, we argue that it is likely that an important part of the variance is due to the robustness of the Structure from Motion (SfM) + Multi-View Stereo (MVS) software

  • We argue that confidence in the fine-grained resolution of drone and SfM + MVS-based outputs in vegetated areas has been undermined both by a lack of ground validation data captured at a similar grain size and by the diversity of workflows


Summary

INTRODUCTION

There is a pressing need within ecology for spatial data that can deliver information about ecosystem functional traits and their dynamics through time. Due to the rapid and at times complex nature of ecosystem dynamics, it is critical to have access to agile, effective, and reproducible methods for capturing key habitat or species traits such as canopy structure. Such data can allow differentiation between early trends and short-term fluctuations and can be used for identifying and establishing conservation sites with specific protected features (Fourcade & Öckinger, 2017). There is a need to quantify the influence of software on data quality, and yet, to our knowledge, there have been no statistically robust investigations of this type. This makes it challenging to attribute differences in results to variations in the SfM + MVS-based method (e.g., the software used). The costs of different SfM + MVS software approaches are not significantly different in terms of learning, processing, and analytical time, as well as financial cost to the user.

