Abstract

Parsing the auditory scene is a problem faced by humans and animals alike. Characteristics such as frequency, intensity, and location all help organisms assign concurrent sounds to specific auditory objects. The timing of sounds is also important for object perception. When sounds completely overlap in time, ascribing separate sounds to individual objects is difficult; even a slight temporal separation makes the task easier. In humans, synchronous streams of high- and low-frequency tones are heard as a single auditory stream, but when the tones are slightly offset in time from one another, a second stream emerges. Here, we compared the perception of simultaneous, asynchronous, and partially overlapping streams of tones, human speech sounds, and budgerigar (Melopsittacus undulatus) contact calls in budgerigars and humans using operant conditioning methods. Human and bird subjects identified the partially overlapping stimuli differentially. Both species required less temporal separation to identify the sounds as “asynchronous” for the complex stimuli than for the pure tones. Interestingly, the psychometric functions differed between the two species. These results suggest that both humans and nonhumans are capable of using temporal offsets to assign auditory objects, and that this ability depends on the spectrotemporal characteristics of the sounds.
