Abstract

Ecological and environmental citizen‐science projects have enormous potential to advance scientific knowledge, influence policy, and guide resource management by producing datasets that would otherwise be infeasible to generate. However, this potential can only be realized if the datasets are of high quality. While scientists are often skeptical of the ability of unpaid volunteers to produce accurate datasets, a growing body of publications clearly shows that diverse types of citizen‐science projects can produce data with accuracy equal to or surpassing that of professionals. Successful projects rely on a suite of methods to boost data accuracy and account for bias, including iterative project development, volunteer training and testing, expert validation, replication across volunteers, and statistical modeling of systematic error. Each citizen‐science dataset should therefore be judged individually, according to project design and application, and not assumed to be substandard simply because volunteers generated it.

Highlights

  • Datasets produced by volunteer citizen scientists can be of reliably high quality, on par with datasets produced by professionals

  • While citizen-science projects vary widely in their subject matter, objectives, activities, and scale (Figures 2–4; Wiggins and Crowston 2015), one common goal is the production of reliable data that can be used for scientific purposes

  • The ecological and environmental sciences have been leaders in citizen science, boasting some of the longest-running projects that have contributed meaningful data to science and conservation, including the Cooperative Weather Service, the National Audubon Society’s Christmas Bird Count (1900; >200 publications have relied on the resulting dataset), the North American Breeding Bird Survey (1966; >670 publications), long-term monitoring of leafing and flowering times of US lilacs and honeysuckles (1956; >50 publications; Rosemartin et al 2015), and the Butterfly Monitoring Scheme (1976; >100 publications)


Summary

Assessing data quality in citizen science

Many of the systematic biases in citizen-science data are the same biases that occur in professionally collected data: spatially and temporally non-random observations (biased by factors such as time of day or week, weather, and human population density; eg Courter et al 2013), non-standardized capture or search effort, under-detection of organisms (Elkinton et al 2009; Crall et al 2011), confusion between similar-looking species, and the over- or under-reporting of rare, cryptic, or elusive species relative to more common ones (Gardiner et al 2012; Kelling et al 2015; Swanson et al 2016). Because these biases also arise in professional ecological research, many statistical methods have been developed to control for and model them, provided that the relevant metadata are recorded (Bird et al 2014). In particular, approaches that combine human and machine classification have the potential to improve both data quality and project efficiency by routing content to the individuals best suited to classify it (Kamar et al 2012). A minimal sketch of one such statistical correction appears below.
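
As a concrete illustration of one such statistical approach, the sketch below simulates under-detection and fits a single-season occupancy model by maximum likelihood, using replicated volunteer visits to separate true absence from missed detections. This is a minimal, hypothetical example rather than a method taken from the cited studies; the site count, visit count, and probability values are assumptions chosen purely for illustration.

```python
# Minimal sketch (assumed values, not from the paper): correcting for
# under-detection with a single-season occupancy model fit by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

n_sites, n_visits = 200, 4        # assumed survey design
psi_true, p_true = 0.6, 0.35      # assumed occupancy and per-visit detection probability

# Simulate data: a species occupies a site with probability psi; on each volunteer
# visit to an occupied site it is detected with probability p (never at empty sites).
occupied = rng.random(n_sites) < psi_true
detections = (rng.random((n_sites, n_visits)) < p_true) & occupied[:, None]

def neg_log_lik(params):
    """Negative log-likelihood of the observed detection histories."""
    psi, p = 1 / (1 + np.exp(-np.asarray(params)))   # logit scale -> probabilities
    d = detections.sum(axis=1)                        # detections per site
    k = n_visits
    # Sites with >=1 detection are certainly occupied; all-zero histories may be
    # either unoccupied or occupied but missed on every visit.
    lik_detected = psi * p**d * (1 - p)**(k - d)
    lik_zero = psi * (1 - p)**k + (1 - psi)
    lik = np.where(d > 0, lik_detected, lik_zero)
    return -np.log(np.clip(lik, 1e-12, None)).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))

naive = (detections.sum(axis=1) > 0).mean()           # ignores imperfect detection
print(f"naive occupancy: {naive:.2f}  estimated psi: {psi_hat:.2f}  p: {p_hat:.2f}")
```

With the assumed values, the naive estimate (the fraction of sites with at least one detection) understates occupancy, while the likelihood-based estimate recovers values close to the simulated truth, illustrating why recording replicated visits and the relevant metadata makes such corrections possible.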

