Abstract

Data collected by volunteers are an important source of information used in species management decisions, yet concerns are often raised over the quality of such data. Two major forms of error exist in occupancy datasets; failing to observe a species when present (imperfect detection—also known as false negatives), and falsely reporting a species as present (false‐positive errors). Estimating these rates allows us to quantify volunteer data quality, and may prevent the inference of erroneous trends. We use a new parameterization of a dynamic occupancy model to estimate and adjust for false‐negative and false‐positive errors, producing accurate estimates of occupancy. We validated this model using simulations and applied it to 12 species datasets collected from a 15‐year, large‐scale volunteer amphibian monitoring program. False‐positive rates were low for most, but not all, species, and accounting for these errors led to quantitative differences in occupancy, although trends remained consistent even when these effects were ignored. We present a model that represents an intuitive way of quantifying the quality of volunteer monitoring datasets, and which can produce unbiased estimates of occupancy despite the presence of multiple types of observation error. Importantly, this allows the quality of volunteer monitoring data to be assessed without relying on comparisons with expert data.
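For readers unfamiliar with this class of model, the general structure of a dynamic occupancy model that admits both error types can be sketched as follows; the notation is generic, drawn from the standard false-positive occupancy literature, and is not the authors' exact parameterization.

```latex
% Latent occupancy state of site i in season 1, and its dynamics thereafter
z_{i,1} \sim \mathrm{Bernoulli}(\psi_1), \qquad
z_{i,t+1} \mid z_{i,t} \sim \mathrm{Bernoulli}\bigl(z_{i,t}\,\phi_t + (1 - z_{i,t})\,\gamma_t\bigr)

% Observation on visit j: true detections where present, false positives where absent
y_{i,j,t} \mid z_{i,t} \sim \mathrm{Bernoulli}\bigl(z_{i,t}\,p_{11} + (1 - z_{i,t})\,p_{10}\bigr)
```

Here ψ₁ is initial occupancy, φ_t and γ_t are persistence and colonization probabilities, p₁₁ is the probability of detecting the species where it is present (so 1 − p₁₁ is the false-negative rate), and p₁₀ is the false-positive rate at unoccupied sites.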

Highlights

  • In recognition of the potential for volunteers to allow cost-effective data collection across large spatial scales, there has been a dramatic increase in citizen-science projects over recent years (Altwegg & Nichols, 2019; Silvertown, 2009)

  • We believe that false-positive dynamic occupancy models represent a good way of performing quality control on long-term volunteer monitoring programs, and can be used to mitigate issues caused by the presence of transient individuals in habitat patches

  • One of the main benefits of false-positive occupancy models is that they allow the quality of volunteer data to be assessed directly from the dataset, rather than by requiring comparisons against other information such as expert opinion, or the need to collect additional secondary datasets


INTRODUCTION

In recognition of the potential for volunteers to allow cost-effective data collection across large spatial scales, there has been a dramatic increase in citizen-science projects over recent years (Altwegg & Nichols, 2019; Silvertown, 2009). Subsequent work has developed alternative solutions to the identifiability issues that arise when false-positive errors are included in occupancy models by using extra information to inform the detection parameters. This involves jointly analyzing the dataset of interest alongside a second, independent dataset in which a subset of sites is monitored with secondary detection methods for which false-positive observations are considered impossible (Chambert, Miller, & Nichols, 2015; Miller et al., 2011, 2013). Although these approaches have been applied successfully, performing calibration experiments to inform false-positive error rates in survey data is often impractical (though see McClintock, Bailey, Pollock, and Simons (2010) and Ruiz-Gutierrez et al. (2016) for successful applications), and there may be situations where secondary datasets are not available. In such cases, evaluating the quality of monitoring datasets requires approaches that function without relying on restrictive constraints or extra data. We believe that false-positive dynamic occupancy models represent a good way of performing quality control on long-term volunteer monitoring programs, and can be used to mitigate issues caused by the presence of transient individuals in habitat patches.
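To make the effect of the two error types concrete, the short simulation below (a minimal sketch with illustrative parameter values and plain NumPy; it is not the authors' code, data, or model) generates single-season detection histories and compares a naive occupancy estimate with the true value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative values only; not estimates from the amphibian dataset
n_sites, n_visits = 500, 4
psi = 0.40   # true occupancy probability
p11 = 0.60   # detection probability at occupied sites (1 - false-negative rate)
p10 = 0.05   # false-positive detection probability at unoccupied sites

# Latent occupancy state of each site
z = rng.binomial(1, psi, size=n_sites)

# Detection histories: true detections where present, false positives where absent
p_detect = np.where(z == 1, p11, p10)
y = rng.binomial(1, p_detect[:, None], size=(n_sites, n_visits))

# Naive occupancy: proportion of sites with at least one reported detection
naive = (y.sum(axis=1) > 0).mean()
print(f"true psi = {psi:.2f}, naive estimate = {naive:.2f}")
```

With these values the expected naive estimate is roughly 0.50 against a true occupancy of 0.40: even a 5% false-positive rate inflates apparent occupancy by more than the occasional occupied-but-never-detected site deflates it, which is why both error rates need to be estimated, whether from secondary data as described above or from the model structure itself.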

