A model for the identification of speech sounds is proposed that assumes that (a) the acoustic cues are perceived independently, (b) feature evaluation provides information about the degree to which each quality is present in the speech sound, (c) each speech sound is defined by a propositional prototype in long-term memory that determines how the featural information is integrated, and (d) the speech sound is identified on the basis of the relative degree to which it matches the various alternative prototypes. The model was supported by the results of an experiment in which subjects identified stop-consonant-vowel syllables that were factorially generated by independently varying acoustic cues for voicing and for place of articulation. This experiment also replicated previous findings of changes in the identification boundary of one acoustic dimension as a function of the level of another dimension. These results have previously been interpreted as evidence for the interaction of the perceptions of the acoustic features themselves. In contrast, the present model provides a good description of the data, including these boundary changes, while still maintaining complete noninteraction at the feature evaluation stage of processing.

Although considerable progress has been made in the field of speech perception in recent years, there is still much that is unknown about the details of how speech sounds are perceived and discriminated. In particular, while there has been considerable success in isolating the dimensions of acoustic information that are important in perceiving and identifying speech sounds, very little is known about how the information from the various acoustic dimensions is put together in order to actually accomplish identification. The present article proposes and tests a model of these fundamental integration processes that take place during speech perception.

Much of the study of features in speech has focused on the stop consonants of English.
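The four assumptions above can be made concrete in a small computational sketch. This is a hypothetical illustration, not the authors' fitted implementation: it assumes that each evaluated feature is expressed as a degree of presence in [0, 1], that featural information is integrated by multiplying the match of each feature against the prototype's specification (a fuzzy conjunction), and that identification follows a relative-goodness rule in which each response probability is a prototype's match divided by the summed match of all alternatives. The prototype definitions and feature names are invented for the example.

```python
def match(features, prototype):
    """Degree to which a perceived sound matches one prototype.

    features:  dict of feature name -> perceived degree of presence in [0, 1]
    prototype: dict of feature name -> 1.0 if the prototype specifies the
               feature as present, 0.0 if specified as absent
    """
    m = 1.0
    for name, ideal in prototype.items():
        v = features[name]
        # Truth of "this feature agrees with the prototype specification";
        # an absent-specified feature matches to the degree (1 - v).
        m *= v if ideal == 1.0 else (1.0 - v)
    return m


def identify(features, prototypes):
    """Response probabilities under a relative-goodness decision rule."""
    goodness = {label: match(features, p) for label, p in prototypes.items()}
    total = sum(goodness.values())
    return {label: g / total for label, g in goodness.items()}


# Illustrative prototypes for four stop-consonant-vowel syllables, defined
# over a voicing cue and a place-of-articulation (labial) cue.
prototypes = {
    "ba": {"voiced": 1.0, "labial": 1.0},
    "pa": {"voiced": 0.0, "labial": 1.0},
    "da": {"voiced": 1.0, "labial": 0.0},
    "ta": {"voiced": 0.0, "labial": 0.0},
}

# An ambiguous stimulus: strongly voiced, weakly labial.
probs = identify({"voiced": 0.8, "labial": 0.3}, prototypes)
```

Because each feature enters the computation only through its own evaluated value, the features remain noninteracting at the evaluation stage; apparent boundary shifts arise downstream, in the integration and relative-goodness stages.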
The stop consonants are a set of speech sounds