Abstract

Humpback whales produce songs consisting of sequences of short, continuous sounds known as units. This paper introduces an automated algorithm for extracting unit contours, together with an unsupervised classification that yields a set of distinct units for the singing group. The analysis is performed on vocalization spectrograms, which are normalized and interpolated onto a square time-frequency image. Unit contours are detected using two edge-detection filters that capture sharp changes in image intensity. The algorithm generates a set of rectangular image segments, each containing a single unit contour, with the pixels outside the contour edge lines set to zero. The contours are then compared with one another to identify distinct units. Each comparison is quantified using parameters including the contour pixel-intensity correlation, contour area, frequency range, and frequency of the peak pixel. A pairwise comparison provides a coarse division into classes, and each class is then represented by a candidate unit. The candidate units are compared with one another, and those with low similarity are advanced to the final set. The algorithm has been tested on humpback whale songs recorded in Hawaiian waters during the winter seasons of 2002 and 2003.
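The pipeline described above — edge filtering of a spectrogram image, zeroing pixels away from the detected contour, and pairwise pixel-intensity correlation between segments — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Sobel kernels stand in for the paper's two edge-detection filters, the `rel_thresh` knob is an assumption, and the toy "spectrogram" is a synthetic frequency-modulated ridge rather than real whale-song data. It also simplifies the zeroing step by keeping only pixels on the detected edges rather than filling between the edge lines.

```python
import numpy as np

# Sobel kernels: two edge-detection filters responding to sharp intensity
# changes along the time (x) and frequency (y) axes of the image.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def conv2(img, kernel):
    """3x3 convolution with zero padding (same output size as input)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def extract_segment(spec, rel_thresh=0.2):
    """Return the spectrogram segment with pixels away from the detected
    contour edges set to zero. rel_thresh is an assumed tuning parameter."""
    mag = np.hypot(conv2(spec, KX), conv2(spec, KY))
    mask = mag > rel_thresh * mag.max()
    return np.where(mask, spec, 0.0)

def similarity(seg_a, seg_b):
    """Pixel-intensity correlation between two equal-size segments,
    one of the comparison parameters listed in the abstract."""
    return float(np.corrcoef(seg_a.ravel(), seg_b.ravel())[0, 1])

# Toy "spectrogram": a bright frequency-modulated ridge on a dark background.
t = np.linspace(0, 1, 64)
f = np.linspace(0, 1, 64)
T, F = np.meshgrid(t, f)
unit_a = np.exp(-((F - (0.3 + 0.2 * T)) ** 2) / 0.002)  # upsweep unit
unit_b = np.exp(-((F - (0.7 - 0.2 * T)) ** 2) / 0.002)  # downsweep unit

seg_a = extract_segment(unit_a)
seg_b = extract_segment(unit_b)
print(similarity(seg_a, seg_a))  # identical contours -> 1.0
print(similarity(seg_a, seg_b))  # distinct contours -> much lower
```

In a full classification pass, `similarity` (together with contour area, frequency range, and peak-pixel frequency) would drive the pairwise comparison that groups contours into coarse classes and then screens candidate units for the final set.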
