Abstract

Humpback whales produce prolonged vocalizations consisting of long, fixed patterns of sound, elements of which are repeated with characteristic accuracy. The first known hydrophone recordings of these songs were made in 1952, yet no objective technique for song classification is widely established. Toward this aim, an artificial neural network algorithm for the extraction and description of basic sound units is being developed. Samples of whale songs are converted into sonogram matrices whose rows correspond to frequencies on a logarithmic scale and whose columns are elemental time slices; matrix values are quantified sound energy levels. Data preprocessed in this way are used to train a self-organizing feature mapping network, which clusters the sounds into an ordered map of acoustic space. The representative sound units extracted and encoded in this map are compared with similar data classified (i) by human visual and aural impressions and (ii) by traditional statistical clustering algorithms.
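The pipeline outlined above (log-frequency sonogram matrices fed to a self-organizing feature map) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the input file name, FFT size, map dimensions, and training schedule are all illustrative, and the update rule is the standard Kohonen form rather than anything specified in the abstract.

```python
# Minimal sketch: sonogram matrix (log-frequency rows, time-slice columns)
# followed by a standard Kohonen self-organizing map over the time slices.
# "humpback.wav", nperseg=1024, the 64-bin log axis, and the 10x10 map are
# illustrative assumptions, not values from the paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

# --- 1. Sonogram matrix: rows = log-spaced frequencies, columns = time slices
rate, samples = wavfile.read("humpback.wav")          # hypothetical input file
samples = samples.astype(float)
if samples.ndim > 1:                                  # mix stereo down to mono
    samples = samples.mean(axis=1)
freqs, times, Z = stft(samples, fs=rate, nperseg=1024)
energy = np.abs(Z) ** 2                               # quantified energy levels

# Resample the linear STFT frequency axis onto a logarithmic scale
log_freqs = np.geomspace(freqs[1], freqs[-1], num=64)
sonogram = np.empty((len(log_freqs), energy.shape[1]))
for j in range(energy.shape[1]):
    sonogram[:, j] = np.interp(log_freqs, freqs, energy[:, j])

# --- 2. Self-organizing feature map: each node holds a weight vector in
# spectral space; training pulls the best-matching node and its grid
# neighbors toward each input slice, yielding an ordered map of sounds.
rng = np.random.default_rng(0)
map_h, map_w = 10, 10                                 # illustrative map size
weights = rng.random((map_h, map_w, len(log_freqs)))
grid = np.stack(np.meshgrid(np.arange(map_h), np.arange(map_w),
                            indexing="ij"), axis=-1)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                       # decaying learning rate
    sigma = max(1.0, map_h / 2 * (1 - epoch / 20))    # shrinking neighborhood
    for x in sonogram.T[rng.permutation(sonogram.shape[1])]:
        d = np.linalg.norm(weights - x, axis=-1)      # distance to every node
        bmu = np.unravel_index(np.argmin(d), d.shape) # best-matching unit
        h = np.exp(-np.sum((grid - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)  # neighborhood update
```

After training, each node's weight vector is a representative spectral shape, and neighboring nodes hold similar sounds, which corresponds to the ordered map of acoustic space against which the human and statistical classifications are compared.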
