Abstract
An automated algorithm for passive acoustic detection of blue whale D-calls was developed based on established deep learning methods for image recognition, using the DenseNet architecture. The detector was trained on annotated acoustic recordings from the Antarctic, and its performance was assessed by calculating precision and recall on a separate, independent Antarctic dataset. Detections from both the human analyst and the automated detector were then inspected by an independent judge to identify any calls missed by either approach and to adjudicate whether the apparent false-positive detections from the automated approach were actually true positives. A final performance assessment was conducted using double-observer methods (via a closed-population Huggins mark-recapture model) to estimate the probability of detection of calls by both the human analyst and the automated detector, under the assumption that the adjudicated detections were free of false positives. According to our double-observer analysis, the automated detector showed superior performance, with higher recall and fewer false positives than the original human analyst, and with performance similar to existing top automated detectors. To understand the performance of both detectors, we inspected the time series and signal-to-noise ratio (SNR) of detections for the test dataset, and found that most of the advantages of the automated detector occurred at low and medium SNR.
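The two evaluation steps described above can be sketched in a few lines. The first function computes precision and recall against annotated ground truth; the second applies the Petersen-style two-observer logic underlying the closed-population Huggins model, estimating each observer's conditional detection probability from the calls the other observer found. This is a minimal illustration, assuming adjudication has removed all false positives; the function names and counts are hypothetical, not from the study.

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true positives, false positives,
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall


def double_observer_detection_prob(n_human, n_auto, n_both):
    """Two-observer (Petersen-style) estimates for a closed population.

    With false positives removed by adjudication, each observer's
    detection probability is estimated from the fraction of the other
    observer's detections that it also found.
    """
    p_human = n_both / n_auto            # P(human detects a true call)
    p_auto = n_both / n_human            # P(detector detects a true call)
    n_total = n_human * n_auto / n_both  # Petersen estimate of total calls
    return p_human, p_auto, n_total


# Illustrative counts only: 80 human detections, 100 detector
# detections, 70 calls found by both.
p_h, p_a, n_est = double_observer_detection_prob(80, 100, 70)
```

In a full analysis, the Huggins model would additionally let detection probability depend on covariates such as SNR, which is the kind of dependence the SNR inspection in the abstract examines.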