Abstract

Camera traps provide a low-cost approach to collecting data and monitoring wildlife across large scales, but hand-labeling images at a rate that outpaces their accumulation is difficult. Deep learning, a subdiscipline of machine learning and computer science, can address this challenge by automatically classifying camera-trap images with a high degree of accuracy. This technique, however, may be less accessible to ecologists or small-scale conservation projects, and it has serious limitations. In this study, we trained a simple deep learning model on a dataset of 120,000 images to identify the presence of nilgai Boselaphus tragocamelus, a regionally specific nonnative game animal, in camera-trap images with an overall accuracy of 97%. We trained a second model to identify 20 groups of animals and one group of images without any animals present, labeled as "none," with an accuracy of 89%. Lastly, we tested the multigroup model on images of similar species collected in the southwestern United States, resulting in significantly lower precision and recall for each group. This study highlights the potential of deep learning for automating camera-trap image processing workflows, provides a brief overview of image-based deep learning, and discusses the often-understated limitations and methodological considerations in the context of wildlife conservation and species monitoring.
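For readers unfamiliar with image-based deep learning, the sketch below illustrates the general kind of approach the abstract describes: a binary classifier (nilgai present vs. absent) built by fine-tuning a pretrained convolutional network. This is not the authors' implementation; the backbone (ResNet-18), directory layout, and hyperparameters are assumptions chosen only for demonstration.

```python
# Illustrative sketch only -- not the study's code. Backbone, paths, and
# hyperparameters are assumptions for demonstration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing applied to each camera-trap frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: data/train/nilgai and data/train/other,
# one subfolder per class as expected by ImageFolder.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: reuse a pretrained ResNet-18 and replace its final
# fully connected layer with a 2-way head (nilgai present / absent).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Minimal training loop; real use would add a validation split,
# early stopping, and per-class precision/recall reporting.
for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The multigroup model described in the abstract would follow the same pattern with a 21-way output head (20 animal groups plus "none") rather than a 2-way head.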
