Abstract

This paper presents a method for the automated acoustic assessment of bird vocalization activity using a machine learning approach. Acoustic biodiversity assessment methods use statistics derived from the vocalizations of various species to infer information about biodiversity. Manual annotations are accurate but time-consuming and therefore expensive, so automated assessment is desirable. Acoustic diversity indices are sometimes used; these are computed directly from the audio, and comparisons between environments can provide insight into their ecologies. However, the abstract nature of the indices makes solid conclusions difficult to reach, and the methods suffer from sensitivity to confounding factors such as noise. Machine-learning-based methods are potentially more powerful because they can be trained to detect and identify species directly from audio. However, these algorithms require large quantities of accurately labeled training data, which is non-trivial to acquire. In this work, a database of soundscapes with known levels of vocalization activity was synthesized to allow training of the algorithm. Comparisons show good agreement between manually annotated and automatic estimates of vocalization activity, both in simulations and in data from a field survey.
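The core idea of the data synthesis step described above, mixing known call recordings into background audio so that the vocalization-activity label is known by construction, can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline; the function name, parameters, and the use of activity duty cycle as the label are assumptions.

```python
import numpy as np

def synthesize_soundscape(background, calls, n_events, rng=None):
    """Mix call snippets into a background track at random offsets.

    Returns the mixture and the fraction of samples covered by calls,
    which serves as the known vocalization-activity label.
    (Hypothetical sketch; the paper's exact procedure may differ.)
    """
    rng = np.random.default_rng(rng)
    mix = background.copy()
    active = np.zeros(len(background), dtype=bool)
    for _ in range(n_events):
        call = calls[rng.integers(len(calls))]
        start = rng.integers(0, len(background) - len(call))
        mix[start:start + len(call)] += call
        active[start:start + len(call)] = True
    return mix, active.mean()

# Toy example: 10 s of low-level noise as "background",
# a short 3 kHz tone burst standing in for a bird call.
sr = 22050
background = 0.01 * np.random.default_rng(0).standard_normal(10 * sr)
t = np.arange(int(0.5 * sr)) / sr
calls = [0.1 * np.sin(2 * np.pi * 3000 * t)]
mix, activity = synthesize_soundscape(background, calls, n_events=8, rng=1)
```

Because the label comes from the synthesis process rather than from manual annotation, arbitrarily large training sets can be generated at negligible cost, which is the motivation given in the abstract.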


