Introduction

The ability to perceive and comprehend speech is, as we argued in the previous chapter, one of the human brain's more astonishing evolutionary accomplishments. Engineers and computer scientists have sought to emulate human speech recognition for about four decades, but it may be as many more before the best automatic speech recognition device can perform as well as the average five-year-old, though great strides have been made in recent years. Recognizing speech and the identity of the speaker is an ability that we normally take for granted, unless we are unfortunate enough to lose this vital skill temporarily or permanently as the result of a ‘stroke’ (cerebrovascular accident) or some other form of damage to language-critical areas of the brain. In this and the following two chapters we will be concerned with the early or peripheral stages of spoken language comprehension: with auditory signal processing; with the extraction of the phonetic features that make up the ‘sound shapes’ of words; with how the phonological forms of words are retrieved from lexical memory; and with how these ‘sound traces’ of words may be represented in the recognition lexicon. Questions of how words are put together to form phrases or sentences belong to later stages of the spoken language comprehension process, and we leave them to later chapters.
