Abstract

Simulations of the human hearing system can help in a number of research fields, including work with the speech and hearing impaired, as well as improving the accuracy of speech recognition systems and the naturalness of speech synthesis technology. The results of psychoacoustic experiments carried out over the last few decades have enabled models of the human peripheral hearing system to be developed. Conventionally, analyses such as the Fast Fourier Transform are used to analyze speech and other sounds in order to establish the acoustic cues that are important for human perception. Such analyses can be shown to be inappropriate in a number of ways, and additional insights into the importance of various acoustic cues could be gained if analyses based on hearing models were used instead. This paper describes an implementation of a real-time spectrograph based on a contemporary model of the peripheral human hearing system, executing on a network of T9000 transputers. The differences between it and conventional spectrographs are illustrated by means of test signals and speech sounds.
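The abstract does not specify which peripheral hearing model the spectrograph uses, but auditory-model spectrographs of this era were commonly built from a gammatone filterbank with centre frequencies spaced on the ERB-rate scale (Glasberg and Moore), followed by a crude hair-cell stage. The sketch below is a minimal illustration of that general approach, not the paper's actual implementation; the function names and parameter choices (channel count, frequency range, filter length) are assumptions for the example.

```python
import numpy as np

def erb(f):
    # Equivalent Rectangular Bandwidth in Hz (Glasberg & Moore, 1990)
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_fir(cf, fs, dur=0.025, order=4):
    # FIR approximation of a 4th-order gammatone impulse response at
    # centre frequency cf (Hz), normalised to unit energy.
    t = np.arange(int(dur * fs)) / fs
    b = 1.019 * erb(cf)
    h = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * cf * t)
    return h / np.sqrt(np.sum(h ** 2))

def auditory_spectrogram(x, fs, n_channels=32, f_lo=100.0, f_hi=8000.0):
    # Centre frequencies equally spaced on the ERB-rate scale, so channel
    # density follows auditory frequency resolution rather than the
    # uniform spacing of an FFT-based spectrograph.
    def hz_to_erbrate(f):
        return 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)
    def erbrate_to_hz(e):
        return (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37
    cfs = erbrate_to_hz(
        np.linspace(hz_to_erbrate(f_lo), hz_to_erbrate(f_hi), n_channels))
    out = np.empty((n_channels, len(x)))
    for i, cf in enumerate(cfs):
        y = np.convolve(x, gammatone_fir(cf, fs), mode="same")
        out[i] = np.maximum(y, 0.0)  # half-wave rectification: crude hair-cell stage
    return cfs, out
```

Fed a 1 kHz tone, such a filterbank concentrates its response in the channel whose centre frequency lies nearest 1 kHz, with bandwidths (and hence time resolution) that broaden with frequency; a conventional FFT spectrograph instead applies one fixed bandwidth across the whole spectrum.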
