Abstract
In the context of rapid urbanization, urban foresters are actively seeking monitoring programs that address the challenge of urban biodiversity loss. Passive acoustic monitoring (PAM) has attracted attention because it allows data to be collected passively, objectively, and continuously across large areas and for extended periods. However, analysing PAM data remains challenging because of the massive amount of information that audio recordings contain. Most existing automated analysis methods have limited applicability in urban areas, with unclear ecological relevance and efficacy. To better support urban forest biodiversity monitoring, we present a novel methodology that integrates object-based classification to automatically extract bird vocalizations from spectrograms of field audio recordings. We applied this approach to acoustic data from an urban forest in Beijing and achieved an accuracy of 93.55% (±4.78%) in vocalization recognition while requiring less than one-eighth of the time needed for traditional inspection. This efficiency gain grows as the data size increases, because object-based classification allows spectrograms to be processed in batches. From the extracted vocalizations, a series of acoustic and morphological features of bird-vocalization syllables (syllable feature metrics, SFMs) could be calculated to better quantify acoustic events and describe the soundscape. A significant correlation between the SFMs and biodiversity indices was found, with SFMs explaining 57% of the variance in species richness, 41% in Shannon's diversity index, and 38% in Simpson's diversity index. Our proposed method therefore provides an effective complementary tool to existing automated methods for long-term urban forest biodiversity monitoring and conservation.
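As a rough illustration of the kind of analysis summarized above, diversity indices can be regressed on syllable feature metrics and the explained variance reported as R². The sketch below uses entirely synthetic data and invented SFM coefficients; it illustrates the analysis type, not the study's actual data or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 30 recording sites, 3 hypothetical syllable feature
# metrics (SFMs), e.g. mean syllable duration, bandwidth, and count.
n_sites = 30
sfms = rng.normal(size=(n_sites, 3))

# Synthetic species richness loosely driven by the SFMs plus noise
# (the coefficients here are arbitrary, chosen only for illustration).
richness = 5 + sfms @ np.array([2.0, 1.0, 0.5]) + rng.normal(scale=1.0, size=n_sites)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n_sites), sfms])
beta, *_ = np.linalg.lstsq(X, richness, rcond=None)
pred = X @ beta

# Coefficient of determination: the fraction of variance in richness
# explained by the SFMs, analogous to the 57% reported in the abstract.
ss_res = np.sum((richness - pred) ** 2)
ss_tot = np.sum((richness - richness.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

In practice such a model would be fit on SFMs computed from field recordings and diversity indices from point counts, with appropriate model diagnostics.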
Highlights
Biodiversity loss has been a major and challenging problem globally, and is a potential risk factor for pandemics [1]
Automated bird vocalization extraction approach based on object-based image analysis (OBIA), which followed a typical analysis workflow of bird vocalizations with three main steps [59]: preprocessing, automated extraction, and feature calculation
Because we were unable to distinguish individual animals based on their sounds, technicians identified and counted the total number of labelled biological acoustic events (BEs)
With pre-processing (denoising), we removed more than 85% (n = 77) of the anthropogenic acoustic events
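The three-step workflow named in the highlights (preprocessing, automated extraction, feature calculation) could be sketched as below. The synthetic spectrogram, the threshold rule, and the per-object features are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np
from scipy import ndimage

# Synthetic "spectrogram": Gaussian background noise with two bright,
# vocalization-like blobs (rows = frequency bins, columns = time frames).
rng = np.random.default_rng(1)
spec = rng.normal(0.0, 1.0, size=(128, 256))
spec[40:50, 30:60] += 10.0     # hypothetical syllable 1
spec[80:95, 150:200] += 10.0   # hypothetical syllable 2

# Step 1: pre-processing / denoising — keep only pixels well above the
# noise floor (a simple global threshold; real pipelines are more refined).
threshold = spec.mean() + 3 * spec.std()
mask = spec > threshold

# Step 2: automated extraction — group connected pixels into objects,
# in the spirit of object-based image analysis (OBIA).
labels, n_objects = ndimage.label(mask)

# Step 3: feature calculation — per-object bandwidth (frequency bins) and
# duration (time frames), stand-ins for syllable feature metrics (SFMs).
features = []
for obj in ndimage.find_objects(labels):
    bandwidth = obj[0].stop - obj[0].start
    duration = obj[1].stop - obj[1].start
    features.append((bandwidth, duration))
print(n_objects, features)
```

On real recordings the spectrogram would come from an STFT of the audio, and each extracted object would be classified (e.g. bird vs. anthropogenic sound) before its features enter the SFM calculations.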
Summary
Biodiversity loss has become a major and challenging problem globally, and it is a potential risk factor for pandemics [1]. The ongoing global COVID-19 pandemic has confirmed this concern: the loss of biodiversity generated conditions that favored the emergence of the virus and enabled the pandemic to surface [2,3]. To assess the success of PAP, rapid and effective monitoring of urban biodiversity is key [5], which has created a need for innovative investigation approaches. The need for expert knowledge and the substantial costs in both money and time are major obstacles for any multi-taxa approach based on large-scale fieldwork [6]. The development of cost-effective and robust tools for monitoring urban forest biodiversity is therefore a pressing need [1]