Abstract

An automated algorithm for passive acoustic detection of blue whale D-calls is developed based on established deep learning methods for image recognition via the DenseNet architecture. Koogu—an open-source Python package—was used for developing the detector. The detector was trained on annotated acoustic recordings from the Antarctic, and its performance was assessed by calculating precision and recall on a separate, independent dataset, also from the Antarctic. Detections from both the human analyst and the automated detector were then inspected by a more experienced analyst to identify any calls missed by either approach and to adjudicate whether the apparent false positives from the automated approach were actually true positives. Lastly, an additional performance assessment was conducted using double-platform methods (via a closed-population Huggins mark-recapture model) to estimate the probability of detection for both the human analyst and the automated detector, based on the assumption of false-positive-free and reconciled detections. According to our double-platform analysis, the automated detector performed very well, with higher recall and fewer false positives than the original human analyst.
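To make the two evaluation approaches concrete, the sketch below computes precision and recall from detection counts, and a simple Petersen-style double-platform estimate of each platform's detection probability. This is an illustrative sketch only, not the paper's code: the counts are made-up placeholders, and the Petersen estimator is a simpler analogue of the closed-population Huggins mark-recapture model used in the study (both rest on the assumption of no false positives in the reconciled detections).

```python
# Illustrative sketch (not the study's actual code). All counts below are
# hypothetical placeholders, not results from the paper.

def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

def double_platform_detection_probs(n1, n2, m):
    """Petersen-style double-platform estimates (a simpler analogue of the
    Huggins mark-recapture model used in the study).

    n1 -- calls found by platform 1 (e.g., the human analyst)
    n2 -- calls found by platform 2 (e.g., the automated detector)
    m  -- calls found by both platforms (the "recaptures")

    Assuming the platforms detect calls independently and produce no
    false positives, p1 is estimated by m / n2 and p2 by m / n1.
    """
    return m / n2, m / n1

# Hypothetical example counts:
p, r = precision_recall(tp=90, fp=10, fn=20)
p1, p2 = double_platform_detection_probs(n1=100, n2=120, m=90)
print(f"precision={p:.3f}, recall={r:.3f}")
print(f"analyst p1={p1:.3f}, detector p2={p2:.3f}")
```

With the placeholder counts above, the detector's higher estimated detection probability (p2 > p1) mirrors the qualitative finding of the double-platform analysis, though the real study fits a Huggins model rather than this closed-form estimator.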
