Abstract

Linear neuromorphic systems assume that stimuli are perceived as linear combinations of a set of underlying features, represented as the eigenvectors of the stimulus space. Learning in this type of system is an autoassociative process that induces prototypes for each perceptual category. Anderson et al. [Psychol. Rev. 84, 413–451 (1977)] proposed that human listeners learn vowel categories through just this type of computational mechanism. To investigate this claim, a linear autoassociative network was trained on a set of prototypical American‐English vowels. The network's performance in learning and classifying vowels produced by an average male talker was examined. In addition, the effects of different auditory coding representations on recognition performance for the male vowels were compared. Since the underlying feature vectors must be linearly independent, the perceptual representation of the vowels can affect learning. Furthermore, the extent to which the feature space learned from one talker's vowels is shared across talkers is investigated; for this type of network to be a plausible model of vowel perception, it must be capable of perceptual constancy across talkers. Finally, the effects of different learning algorithms on the development of vowel categories in perceptual space were compared. The results of these studies and their implications for simple, linear network models of speech perception are discussed.
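The autoassociative mechanism described above can be sketched in a few lines. This is a minimal illustration, not the study's actual model: the placeholder random vectors stand in for coded vowel stimuli, and the Widrow–Hoff (delta-rule) update is one common choice for training a linear autoassociator so that each stored pattern becomes an approximate eigenvector of the weight matrix with eigenvalue near 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus vectors: each column is one "vowel" pattern.
# (Placeholder random data; the study used auditory codings of
# prototypical American-English vowels.)
patterns = rng.standard_normal((16, 4))          # 16-dim space, 4 prototypes
patterns /= np.linalg.norm(patterns, axis=0)     # unit-normalize for stability

# Widrow-Hoff (delta-rule) autoassociative learning: adjust W so that
# W @ x ≈ x for every trained pattern.
W = np.zeros((16, 16))
eta = 0.1
for _ in range(500):
    for j in range(patterns.shape[1]):
        x = patterns[:, j]
        W += eta * np.outer(x - W @ x, x)

# After training, each stored prototype is recalled almost exactly,
# i.e., it behaves as an eigenvector of W with eigenvalue near 1.
x = patterns[:, 0]
recall = W @ x           # recall ≈ x for a trained prototype

# A novel input is pulled toward the span of the learned prototypes,
# which is the sense in which the network induces category prototypes.
novel = rng.standard_normal(16)
response = W @ novel
```

Note that the delta rule converges only if the trained patterns are linearly independent, which is exactly why, as the abstract observes, the choice of perceptual representation can affect learning.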
