Abstract

Ripple noise is produced when a broadband noise is delayed (T) and either added to or subtracted from the original noise. The resulting waveform has a power spectrum in which power varies cosinusoidally as a function of frequency. Ripple noise has been used to study pitch perception and echo processing. In the present experiments, subjects were asked to discriminate between two noises that differed in the amount of delay introduced (one stimulus contained a delay T, the second stimulus a delay T + 0.1T). The discriminability of these two ripple noises was studied as a function of the amount of attenuation added to the delayed noise; increasing the attenuation of the delayed noise decreases the strength of the pitch of ripple noise. Delays (T) of 1, 2, 5, and 10 msec were used to produce ripple noises, which were passed through very steep, one-third-octave digital filters centered at 500, 1000, 2000, and 4000 Hz. The pitch of the ripple noise produced with any delay (T) was strongest (i.e., the greatest amount of attenuation could be added to the delayed noise before discrimination performance fell to near threshold) when the center frequency of the filter was four times 1/T. For center frequencies below or above 4 × 1/T, the pitch was weaker. The pitch of ripple noise produced when the delayed noise was subtracted from the undelayed noise was weaker than when the delayed noise was added to the undelayed noise. These data will be discussed in terms of recent models of pitch perception and the assumption that pitch is determined in a dominant spectral region. [Work supported by NSF.]
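For readers who want to reproduce the stimulus, the construction described above reduces to y(t) = x(t) ± g·x(t − T), whose power spectrum is proportional to 1 + g² ± 2g cos(2πfT), so power ripples cosinusoidally in frequency with period 1/T. Below is a minimal Python sketch of that construction (not the authors' signal-generation code; the sampling rate, parameter names, and 6 dB example value are illustrative assumptions):

    import numpy as np

    def ripple_noise(duration=1.0, fs=44100, delay_ms=5.0, atten_db=0.0,
                     sign=+1, seed=0):
        """Broadband Gaussian noise plus (sign=+1) or minus (sign=-1) a
        delayed, attenuated copy of itself (cos+ / cos- ripple noise)."""
        rng = np.random.default_rng(seed)
        n = int(duration * fs)
        noise = rng.standard_normal(n)
        d = int(round(delay_ms * 1e-3 * fs))  # delay T in samples
        g = 10.0 ** (-atten_db / 20.0)        # linear gain of delayed copy
        delayed = np.empty_like(noise)        # noise shifted right by d samples
        delayed[:d] = 0.0
        delayed[d:] = noise[:n - d]
        return noise + sign * g * delayed

    # Example: T = 5 ms, delayed copy attenuated by 6 dB and added (cos+).
    x = ripple_noise(delay_ms=5.0, atten_db=6.0, sign=+1)

Raising atten_db lowers g and flattens the spectral ripple, which is how the experiments weakened the pitch; the one-third-octave filtering described in the abstract would be applied to this output as a separate step.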
