Abstract

Today's acoustic monitoring devices can record and store tremendous amounts of data. Until recently, the classification of animal vocalizations from field recordings has been limited to qualitative approaches, which are very time-consuming for large-scale acoustic monitoring studies and suffer from subjective bias. Recent developments in supervised learning techniques can provide rapid, accurate, species-level classification of bioacoustic data. We compared the classification performance of four supervised learning techniques (random forests, support vector machines, artificial neural networks, and discriminant function analysis) on five classification tasks using bat echolocation calls recorded by a popular frequency-division bat detector. All classifiers performed similarly in overall accuracy, with the exception of discriminant function analysis, which had the lowest average performance metrics. Random forests offered high sensitivity, specificity, and predictive power across the majority of classification tasks, and also provided metrics for ranking the relative importance of call features in distinguishing between groups. Overall classification accuracy for each task was slightly lower than accuracies reported for calls recorded by time-expansion detectors. Myotis spp. were particularly difficult to separate; classifiers performed best when members of this genus were combined in a genus-level classification and then analyzed separately at the species level. Additionally, we identified and ranked the relative contributions of all predictor features to classifier accuracy, and found measurements of frequency, total call duration, and characteristic slope to be the most important contributors to classification success.
We provide recommendations to maximize accuracy and efficiency when analyzing acoustic data, and suggest an application of automated bioacoustics monitoring to contribute to wildlife monitoring efforts.
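The comparison described above can be sketched with scikit-learn. The snippet below is a minimal illustration, not the study's actual pipeline: the call-feature names and the synthetic data are hypothetical stand-ins for measured echolocation-call parameters. It cross-validates the four classifier families the abstract names and then extracts the random forest's relative feature importances, the mechanism the abstract refers to for ranking call features.

```python
# Sketch: compare four classifier families on synthetic "call feature" data,
# then rank predictor importance with a random forest.
# Feature names and data are illustrative, not the study's dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

SEED = 0
# Stand-in for a table of call measurements (one row per call, one class per species).
X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           n_classes=3, random_state=SEED)
feature_names = ["min_freq", "max_freq", "char_freq",
                 "duration", "char_slope", "total_slope"]  # hypothetical labels

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=SEED),
    "SVM": SVC(random_state=SEED),
    "ANN": MLPClassifier(max_iter=2000, random_state=SEED),
    "DFA": LinearDiscriminantAnalysis(),  # linear discriminant analysis as DFA
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")

# Random forests expose a relative importance score for each predictor.
rf = classifiers["random forest"].fit(X, y)
ranked = sorted(zip(feature_names, rf.feature_importances_),
                key=lambda t: -t[1])
for fname, importance in ranked:
    print(f"{fname}: {importance:.3f}")
```

The importances sum to one, so they can be read directly as relative contributions, which is how a ranking like "frequency, duration, and characteristic slope matter most" would be produced in practice.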
