Abstract

Public participation in research, or community science (CS), plays an important role in advancing ecological research, especially data processing. CS contributions to camera trap studies have supported wildlife conservation through the rapid processing of images and videos. However, more studies are needed to quantify the accuracy and efficiency of CS participation. We used a case study from Chicago Wildlife Watch, a Zooniverse project, to explore variability in image classification accuracy and to assess the efficiency of responsive retirement rules, which dictate how many times an image is viewed and annotated. We found that CS participants were highly accurate when classifying empty images (96.0%) and commonly photographed species in our study area (60.14% across all species). Agreement among users on species images had the greatest impact on classification accuracy, though accuracy was higher for images containing larger species and for those annotated by more engaged participants. With respect to efficiency, we found that three consecutive 'empty' classifications from participants yielded over 95% classification accuracy for empty images, and when seven participants agreed on the species present in an image, they were accurate 98% of the time on average. These results further support the value of CS in ecological research and the value of tailored project designs that consider the occurrence of regional species and the field system (e.g., camera placement or ecosystem). Given these results, we encourage scientists to continue applying quantitative techniques to custom-design projects that use CS participants' time effectively and maximize data accuracy.
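
The abstract describes responsive retirement rules in terms of two thresholds: a run of consecutive 'empty' classifications and a count of agreeing species classifications. The sketch below is a minimal, hypothetical illustration of how such a rule could be expressed in Python; it is not the Chicago Wildlife Watch or Zooniverse implementation, and the function names, the hard cap on classifications, and the plurality fallback are assumptions for illustration only.

```python
# Hypothetical sketch of a responsive retirement rule, loosely based on the
# thresholds reported in the abstract (3 consecutive 'empty' votes, or 7
# agreeing species votes). Not the actual project implementation; the hard
# cap and plurality fallback are illustrative assumptions.

from collections import Counter

EMPTY_STREAK_TO_RETIRE = 3        # consecutive 'empty' classifications
SPECIES_AGREEMENT_TO_RETIRE = 7   # matching species classifications
MAX_CLASSIFICATIONS = 15          # assumed hard cap before forced retirement


def retirement_decision(classifications):
    """Return (retired, label) for classifications given in arrival order.

    Each classification is a string: 'empty' or a species name.
    """
    counts = Counter()
    empty_streak = 0

    for label in classifications:
        counts[label] += 1
        empty_streak = empty_streak + 1 if label == "empty" else 0

        # Rule 1: retire as empty after an unbroken run of 'empty' votes.
        if empty_streak >= EMPTY_STREAK_TO_RETIRE:
            return True, "empty"

        # Rule 2: retire once enough users agree on the same species.
        if label != "empty" and counts[label] >= SPECIES_AGREEMENT_TO_RETIRE:
            return True, label

    # Rule 3 (assumed): retire with the plurality label at the cap.
    if sum(counts.values()) >= MAX_CLASSIFICATIONS:
        return True, counts.most_common(1)[0][0]

    return False, None


if __name__ == "__main__":
    print(retirement_decision(["empty", "empty", "empty"]))     # (True, 'empty')
    print(retirement_decision(["coyote"] * 7))                  # (True, 'coyote')
    print(retirement_decision(["coyote", "empty", "raccoon"]))  # (False, None)
```

Under a rule like this, unambiguous empty images retire after only three views, reserving the larger annotation budget for images where participants disagree or a species is present.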
