Abstract

Large acoustic data sets are typically generated from ocean observations with a 160-element coherent hydrophone array, and correspondingly large volumes of acoustic detection events stem from coherent array processing. Beamforming enhances the detection signal-to-noise ratio, significantly improving detection ranges, and also provides signal bearing. Here, we develop and train algorithms for the automatic detection and classification of baleen and toothed whale calls present in multiple beamformed spectrograms spanning 360-degree azimuths, generated via the passive ocean acoustic waveguide remote sensing technique, in the following six categories for the Gulf of Maine: Fin, Sei, Minke, Humpback, unidentified baleen whale downsweep chirps, and general toothed whale encompassing echolocation clicks and whistles below 4 kHz. The classifiers include random forest, support vector machine (SVM), and decision tree applied to hand-engineered features, as well as a Convolutional Neural Network (CNN)-based model applied directly to the per-channel energy normalization (PCEN) transform of beamformed spectrogram imagery. A total accuracy of 95% and an average F1-score of 85% are achieved with the random forest classifier. The processing flow, including beamforming, PCEN extraction, and call classification, runs in real time, making the methods suitable for real-world applications such as marine mammal monitoring and mitigation in ocean hydrocarbon prospecting and wind farm installations.
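For readers unfamiliar with the PCEN transform mentioned above, the following is a minimal sketch of the standard formulation (an adaptive-gain-control term from a temporally smoothed spectrogram, followed by dynamic range compression). The parameter values shown are common illustrative defaults, not the settings used in this study:

```python
import numpy as np

def pcen(S, s=0.025, gain=0.98, bias=2.0, power=0.5, eps=1e-6):
    """Per-channel energy normalization of a nonnegative spectrogram.

    S : array of shape (n_freq, n_time).
    s : smoothing coefficient of the first-order IIR filter over time.
    Returns an array of the same shape as S.
    """
    M = np.empty_like(S, dtype=float)
    M[:, 0] = S[:, 0]
    # First-order IIR low-pass filter along the time axis, per frequency channel
    for t in range(1, S.shape[1]):
        M[:, t] = (1.0 - s) * M[:, t - 1] + s * S[:, t]
    # Adaptive gain control followed by dynamic range compression
    return (S / (eps + M) ** gain + bias) ** power - bias ** power
```

In a pipeline such as the one described, a transform of this kind would be applied to each beamformed spectrogram before it is passed to the CNN-based classifier; it suppresses stationary background noise while emphasizing transient calls.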
