Abstract

We describe a model of speech perception, based on the Interactive Activation Model of Visual Word Perception (cf. McClelland and Rumelhart, in press; Rumelhart and McClelland, in press), in which excitatory and inhibitory interactions among nodes for phonetic features, phonemes, and words account for aspects of the interplay of bottom-up and top-down processes in speech perception. Results from a working computer simulation of this model are presented. Input to the program consists of specifications of distinctive features of speech as they unfold in time. Features, phonemes, and words consistent with the input are activated, missing specifications may be filled in, and slight errors may be corrected, so that the “percept” formed by the simulation exhibits such phenomena as phonemic restoration and related perceptual effects. [Work supported by NSF.]
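The dynamics the abstract describes can be illustrated with a minimal interactive-activation sketch. The following is a simplified illustration, not the actual simulation: the two-word lexicon, the parameter values, and the omission of feature-level nodes and within-level phoneme inhibition are all assumptions made for brevity. It shows how bottom-up evidence excites phoneme nodes, phonemes excite consistent word nodes, words compete via mutual inhibition, and top-down feedback from words boosts a weakly specified phoneme, as in phonemic restoration.

```python
# Illustrative interactive-activation sketch (parameters and lexicon
# are hypothetical, chosen only to demonstrate the dynamics).
REST, MIN_A, MAX_A, DECAY = -0.1, -0.2, 1.0, 0.1
EXCITE, INHIBIT = 0.1, 0.2

# Toy lexicon: two words differing only in their final phoneme.
WORDS = {"cat": ["k", "a", "t"], "cap": ["k", "a", "p"]}
PHONEMES = sorted({p for phones in WORDS.values() for p in phones})

def update(a, net):
    """Standard interactive-activation update: net input scaled by
    distance to the ceiling (or floor), plus decay toward rest."""
    if net > 0:
        a += net * (MAX_A - a)
    else:
        a += net * (a - MIN_A)
    a -= DECAY * (a - REST)
    return max(MIN_A, min(MAX_A, a))

def step(phon_act, word_act, bottom_up):
    """One cycle: bottom-up excitation of phonemes, top-down feedback
    from words to their phonemes, phoneme-to-word excitation, and
    word-to-word inhibition (within-level phoneme inhibition omitted)."""
    new_phon = {}
    for p in PHONEMES:
        net = EXCITE * bottom_up.get(p, 0.0)
        for w, phones in WORDS.items():
            if p in phones:                      # top-down support
                net += EXCITE * max(word_act[w], 0.0)
        new_phon[p] = update(phon_act[p], net)
    new_word = {}
    for w, phones in WORDS.items():
        net = sum(EXCITE * max(phon_act[p], 0.0) for p in phones)
        net -= sum(INHIBIT * max(word_act[v], 0.0)  # competition
                   for v in WORDS if v != w)
        new_word[w] = update(word_act[w], net)
    return new_phon, new_word

phon = {p: REST for p in PHONEMES}
word = {w: REST for w in WORDS}
evidence = {"k": 1.0, "a": 1.0, "t": 0.3}  # final /t/ only weakly present
for _ in range(30):
    phon, word = step(phon, word, evidence)
```

After a few dozen cycles, "cat" dominates "cap" because the weak /t/ evidence tips the word-level competition, and feedback from "cat" in turn raises /t/ above /p/, a toy analogue of the restoration effect described above.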
