Abstract

Citizen science platforms such as eBird and iNaturalist are dramatically increasing the biodiversity data available for scientific research. Questions remain about the validity of data collected by people with undefined credentials, yet few studies have examined the quality of citizen science data in detail. As part of an autumn orientation program, the Honors College at UMass Boston invited incoming students to a retreat on Thompson Island in Boston Harbor Islands National and State Park. One of their activities was a three-hour bioblitz using iNaturalist. We reviewed data collected from three autumn orientations (2017–2019) to evaluate the quality of the data and to examine the hypothesis that first-time users can contribute useful biodiversity observations. The students collected more than 2,000 observations and uploaded more than 5,700 photographs, mostly of plants (about 50%) and animals (40%). Approximately 50% of the observations became Research Grade by iNaturalist criteria. Errors in GPS data (ca. 1–4% of observations) prevented some observations from being placed automatically in the project. First-time users, presumably because they are digital natives with cell phone camera experience, quickly mastered the basics of iNaturalist. We conclude that students using the iNaturalist platform, with its crowd-sourced identification process, produce data that are useful for a variety of biodiversity studies.
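The Research Grade share reported above can be re-tallied at any time from the public iNaturalist API. The sketch below is a minimal illustration, not the authors' analysis pipeline: it compares a project's total observation count against the count with quality_grade=research. The project slug is hypothetical, since the actual project identifier is not given here.

```python
import requests

API = "https://api.inaturalist.org/v1/observations"
# Hypothetical project slug for illustration; the real identifier for
# the Thompson Island bioblitz project is not given in this paper.
PROJECT = "thompson-island-bioblitz"

def observation_count(**params):
    """Return total_results for an observation query; only the count
    is needed, so request a single record per page."""
    resp = requests.get(API, params={"per_page": 1, **params}, timeout=30)
    resp.raise_for_status()
    return resp.json()["total_results"]

total = observation_count(project_id=PROJECT)
research = observation_count(project_id=PROJECT, quality_grade="research")
if total:  # guard against an empty or nonexistent project
    print(f"{research} of {total} observations "
          f"({research / total:.0%}) are Research Grade")
```

Because identifications continue to accrue after upload, a live query will drift from the roughly 50% figure reported here.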

Highlights

  • The promise of citizen science (CS) holds that it can both educate its participants and provide useful scientific data (Bonney et al. 2009; Raddick et al. 2009; Bonney et al. 2016)

  • During rubric development and scoring, we discovered that some observations made by students during the orientation program had not been included in the project by the iNaturalist algorithm, a possibility we had not originally considered (a simplified sketch of this location-based inclusion check appears after this list)

  • Readers should be aware that all the numbers in Table 2 changed over time because of data curation by project managers, individuals withdrawing their observations, and additional identifications being added
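A minimal sketch of why a GPS error can exclude an observation: iNaturalist's place-based projects gather observations whose coordinates fall within the project's place boundary, so a bad coordinate fix lands a record outside that boundary and it is never added. The bounding box below is an approximate, hypothetical rectangle standing in for Thompson Island; real iNaturalist places use exact polygon boundaries, so treat this only as an illustration of the principle.

```python
# Approximate, hypothetical bounding box around Thompson Island in
# Boston Harbor; actual iNaturalist places use exact polygon boundaries.
SWLAT, SWLNG, NELAT, NELNG = 42.30, -71.03, 42.33, -70.99

def inside_island_bbox(observation: dict) -> bool:
    """Return True if an observation's 'location' field (a "lat,lng"
    string, as returned by the iNaturalist v1 API) falls inside the
    rectangle. A record failing a check like this would not be added
    to a place-based project automatically."""
    lat, lng = (float(part) for part in observation["location"].split(","))
    return SWLAT <= lat <= NELAT and SWLNG <= lng <= NELNG

# A GPS fix that drifted toward downtown Boston falls outside the box
# and would be excluded:
print(inside_island_bbox({"location": "42.355,-71.060"}))  # False
```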

Introduction

The promise of citizen science (CS) holds that it can both educate its participants and provide useful scientific data (Bonney et al. 2009; Raddick et al. 2009; Bonney et al. 2016). Science educators typically instruct students with well-thought-out exercises rather than with authentic science (but see studies of class-based or course-based undergraduate research experiences, such as Spell et al. 2017 and Flaherty et al. 2017). These carefully planned demonstrations and laboratory experiments are tested and revised so they "work." In contrast, the process of doing science is not straightforward: it involves missteps and failed experiments (Understanding Science 2021) that necessarily slow the learning of science content and that can be challenging to undertake in classrooms (Spell et al. 2017; Harlin et al. 2018).
