Abstract

The accurate identification of species in images submitted by citizen scientists is currently a bottleneck for many data uses. Machine learning tools offer the potential to provide rapid, objective and scalable species identification for the benefit of many aspects of ecological science. Currently, most approaches only make use of image pixel data for classification. However, an experienced naturalist would also use a wide variety of contextual information such as the location and date of recording. Here, we examine the automated identification of ladybird (Coccinellidae) records from the British Isles submitted to the UK Ladybird Survey, a volunteer‐led mass participation recording scheme. Each image is associated with metadata — a date, location and recorder ID — which can be cross‐referenced with other data sources to determine local weather at the time of recording, habitat types and the experience of the observer. We built multi‐input neural network models that synthesize metadata and images to identify records to species level. We show that machine learning models can effectively harness contextual information to improve the interpretation of images. Against an image‐only baseline of 48.2%, we observe a 9.1 percentage‐point improvement in top‐1 accuracy with a multi‐input model, compared to only a 3.6 percentage‐point increase when using an ensemble of image and metadata models. This suggests that contextual data are being used to interpret an image, beyond just providing a prior expectation. Our neural network models appear to draw on similar pieces of evidence to those used by human naturalists when making identifications. Metadata is a key tool for human naturalists; we show it can also be harnessed by computer vision systems. Contextualization offers considerable extra information, particularly for challenging species, even within small and relatively homogeneous areas such as the British Isles. Although complex relationships between disparate sources of information can be profitably interpreted by simple neural network architectures, there is likely considerable room for further progress. Contextualizing images has the potential to lead to a step change in the accuracy of automated identification tools, with considerable benefits for large‐scale verification of submitted records.
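The multi-input approach described above can be sketched as a network with two branches whose representations are concatenated before a shared classification head, so that metadata can modulate how image evidence is interpreted rather than acting only as a prior. This is a minimal illustrative sketch in PyTorch: the layer sizes, the simple concatenation fusion, and the feature counts are assumptions for illustration, not the architecture used in the study.

```python
import torch
import torch.nn as nn

class MultiInputClassifier(nn.Module):
    """Hypothetical sketch: fuse CNN image features with a metadata vector."""

    def __init__(self, n_species: int, n_meta_features: int):
        super().__init__()
        # Stand-in image branch; in practice this would be a pretrained CNN.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Metadata branch: encoded date, location, weather, recorder
        # experience, etc. (feature encoding assumed, not from the paper).
        self.meta_branch = nn.Sequential(
            nn.Linear(n_meta_features, 32),
            nn.ReLU(),
        )
        # Joint head sees both representations at once, so context can
        # change the interpretation of the image features.
        self.head = nn.Linear(16 + 32, n_species)

    def forward(self, image: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_branch(image), self.meta_branch(meta)], dim=1)
        return self.head(fused)

model = MultiInputClassifier(n_species=18, n_meta_features=8)
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 18])
```

The design choice of interest is the shared head: because it is trained on the concatenated representation, it can learn interactions between context and image evidence, which a post hoc ensemble of separate models cannot.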

Highlights

  • Large-scale and accurate biodiversity monitoring is a cornerstone of understanding ecosystems and human impacts upon them (IPBES, 2019)

  • We show that machine learning models can effectively harness contextual information to improve the interpretation of images

  • We examine whether metadata can significantly improve classification accuracy, thereby increasing its potential to assist in large-scale biodiversity monitoring, by the following: 1. Comparing the classification accuracy of classifiers incorporating metadata to that of image-only classifiers
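The comparison in the highlight above contrasts models that incorporate metadata with an image-only baseline. The simplest way to incorporate metadata is a late-fusion ensemble, in which the per-species probabilities of an image-only model and a metadata-only model are averaged; here metadata acts purely as a prior over species and cannot change how the image itself is read. A toy sketch (the mixing weight and the probability values are made-up illustrations):

```python
import numpy as np

def ensemble_predict(image_probs: np.ndarray,
                     meta_probs: np.ndarray,
                     weight: float = 0.5) -> np.ndarray:
    """Late-fusion baseline: weighted average of per-species probabilities
    from an image-only model and a metadata-only model. `weight` is a
    hypothetical mixing parameter, not a value from the study."""
    combined = weight * image_probs + (1 - weight) * meta_probs
    # Renormalize so each row is a valid probability distribution.
    return combined / combined.sum(axis=1, keepdims=True)

# Toy example: 2 records, 3 candidate species.
image_probs = np.array([[0.5, 0.3, 0.2],
                        [0.4, 0.4, 0.2]])
meta_probs = np.array([[0.1, 0.8, 0.1],
                       [0.2, 0.2, 0.6]])
preds = ensemble_predict(image_probs, meta_probs).argmax(axis=1)
print(preds)  # [1 2]
```

Note how the metadata prior flips both toy predictions away from the image-only choice; the abstract's result is that a jointly trained multi-input model gains substantially more accuracy than this kind of ensemble.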



Introduction

Large-scale and accurate biodiversity monitoring is a cornerstone of understanding ecosystems and human impacts upon them (IPBES, 2019). Recent advances in artificial intelligence have revolutionized the outlook for automated tools to provide rapid, scalable, objective and accurate species identification and enumeration (Torney et al., 2019; Wäldchen & Mäder, 2018; Weinstein, 2018; Willi et al., 2019). At present, general-purpose automated classification of animal species is still some distance from the level of accuracy obtained by human experts. While image acquisition by researchers can be directly controlled, leading to high accuracy (Marques et al., 2018; Rzanny, Seeland, Wäldchen, & Mäder, 2017), images from citizen science projects are highly variable and pose considerable challenges for computer vision (Van Horn et al., 2017).

