Abstract

The vocalization behavior of humpback whales was previously studied and mapped over instantaneous wide areas of the Gulf of Maine (GOM), spanning more than 100 km in diameter, using the passive ocean acoustic waveguide remote sensing technique during the fall feeding season. Acoustic signals were received on a 160-element hydrophone array, and beamforming was employed to significantly improve the signal-to-noise ratio of the received vocalizations. Humpback whale vocalizations can be divided into two classes: song and non-song calls. Song vocalizations are composed of a repeatable set of phrases with consistently short inter-pulse intervals, whereas non-song vocalizations, such as ‘bow-shaped’ and ‘downsweep’ moans, have large, highly variable inter-pulse intervals and no repeatable pattern. Here we employ machine learning approaches to classify humpback whale vocalizations into song and non-song calls. Preprocessing methods including frequency filtering, wavelet denoising, beamforming, and spectral smoothing are applied. Several automated classification methods, including Support Vector Machines, Gaussian Naive Bayes, and Neural Networks, are explored. Applying these algorithms to the GOM dataset yields over 88% classification accuracy, implying that machine learning approaches can be used in field studies for real-time classification.
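The song/non-song distinction described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' pipeline): it generates synthetic inter-pulse-interval (IPI) features in which "song" calls have short, consistent IPIs and "non-song" calls have long, variable IPIs, then trains the three classifier families named in the abstract using scikit-learn. All numeric values and feature choices here are invented for demonstration.

```python
# Hypothetical sketch: song vs. non-song classification from inter-pulse-
# interval (IPI) statistics, using the classifier families named in the
# abstract (SVM, Gaussian Naive Bayes, Neural Network). The data below is
# synthetic; the real study used preprocessed GOM hydrophone recordings.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_features(is_song: bool) -> list:
    """Return [mean IPI, IPI std] for one synthetic call sequence."""
    if is_song:
        # Song: consistently short inter-pulse intervals (assumed ~1 s).
        ipis = rng.normal(1.0, 0.1, size=20)
    else:
        # Non-song moans: large, highly variable intervals (assumed ~5 s).
        ipis = rng.normal(5.0, 2.0, size=20)
    return [ipis.mean(), ipis.std()]

labels = np.array([i % 2 == 0 for i in range(200)], dtype=int)  # 1 = song
X = np.array([make_features(bool(y)) for y in labels])
Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)

for clf in (SVC(), GaussianNB(), MLPClassifier(max_iter=500, random_state=0)):
    acc = clf.fit(Xtr, ytr).score(Xte, yte)
    print(f"{type(clf).__name__}: accuracy = {acc:.2f}")
```

Because the synthetic classes are well separated in IPI statistics, all three classifiers score near 100% here; on real field recordings, preprocessing steps such as beamforming and wavelet denoising are what make the features separable enough for the reported 88%+ accuracy.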
