Abstract

Traditional approaches to analyzing vocal sequences typically involve identifying individual sound units, labeling the identified sounds, and describing the regularity of label sequences [A. Kershenbaum et al., “Acoustic sequences in non-human animals: A tutorial review and prospectus,” Biol. Rev. (2014)]. Although this method can provide useful information about the structure of sound sequences, the criteria for deciding when distinct units have been successfully classified are often subjective, and the temporal dynamics of sound generation are usually ignored. Self-organizing maps (SOMs) provide an alternative approach to classifying inputs that requires neither subjective sorting nor isolation of units. For instance, SOMs can be used to classify fixed-duration frames sampled from recordings. Once an SOM has been trained to sort frames, the temporal structure of a vocal sequence can be analyzed by training a second SOM to sort spatiotemporal patterns of activation within the frame-sorting SOM. Analyzing humpback whale “song” using this technique revealed that: (1) perceptually warped spectra from frames varied uniformly along several continua; (2) a subset of frame patterns (sound types) was more prevalent than others; and (3) the features of some sound types, but not others, varied systematically as a function of sequential position within a song.
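
For illustration, a minimal sketch of this two-stage SOM pipeline is given below, using the open-source MiniSom library. The map sizes, window length, and the use of best-matching-unit (BMU) coordinate trajectories as the "spatiotemporal patterns of activation" are assumptions made for the example, not the authors' exact implementation.

    import numpy as np
    from minisom import MiniSom

    def two_stage_som(frames, window=10, frame_map=(12, 12),
                      seq_map=(8, 8), iters=5000):
        # frames: (n_frames, n_features) array of perceptually warped
        # spectra, one row per fixed-duration frame from a recording.

        # Stage 1: train an SOM that sorts individual frames into sound types.
        som1 = MiniSom(frame_map[0], frame_map[1], frames.shape[1],
                       sigma=1.5, learning_rate=0.5, random_seed=0)
        som1.train(frames, iters)

        # Represent each frame by the coordinates of its best-matching unit,
        # then slide a window over the sequence so each sample captures a
        # spatiotemporal pattern of activation on the first map.
        bmus = np.array([som1.winner(f) for f in frames], dtype=float)
        trajectories = np.array([bmus[i:i + window].ravel()
                                 for i in range(len(bmus) - window + 1)])

        # Stage 2: train a second SOM on the windowed activation
        # trajectories to expose the sequence's temporal structure.
        som2 = MiniSom(seq_map[0], seq_map[1], trajectories.shape[1],
                       sigma=1.5, learning_rate=0.5, random_seed=0)
        som2.train(trajectories, iters)
        return som1, som2

Under these assumptions, frames whose BMUs recur often on the first map would correspond to the more prevalent sound types, while clusters on the second map would correspond to recurring temporal motifs in the song.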
