Abstract

Newly born infants are able to finely discriminate almost all human speech contrasts, and their phonemic category boundaries are initially identical, even for phonemes outside their target language. A connectionist model is described which accounts for this ability. The approach taken has been to develop a model of innately guided learning in which an artificial neural network (ANN) is stored in a "genome" which encodes its architecture and learning rules. The space of possible ANNs is searched with a genetic algorithm for networks that can learn to discriminate human speech sounds. These networks perform equally well when they have been trained on speech spectra from any human language so far tested (English, Cantonese, Swahili, Farsi, Czech, Hindi, Hungarian, Korean, Polish, Russian, Slovak, Spanish, Ukrainian, and Urdu). Training the evolved networks requires exposure to just two minutes of speech in any of these languages. Categorisation of speech sounds based on the network representations shows the hallmarks of categorical perception. Phoneme confusability in the network replicates earlier studies of phoneme confusability in adults. The network model offers an epigenetic account of the rapid emergence of speech perception skills in young infants, whereby innately specified neural systems exploit regularities in the speech signal to construct representations that are well-suited to the identification of speech segments. The model also suggests how infants' early preferential attention to speech is driven by the rapid construction of suitable representations.
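To make the evolutionary setup concrete, the following is a minimal sketch, not the paper's implementation: here a hypothetical genome encodes only a hidden-layer size and a learning rate, fitness is discrimination accuracy after brief training, and two Gaussian clusters stand in for the speech spectra. The actual genome, learning rules, and training data in the model are substantially richer.

```python
# Toy sketch of a genetic algorithm searching a space of small ANNs.
# All specifics (genome fields, delta-rule training, synthetic data)
# are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def random_genome():
    # Genome: hidden-layer size and learning rate -- a stand-in for the
    # paper's encoding of network architecture and learning rules.
    return {"hidden": int(rng.integers(4, 32)), "lr": rng.uniform(0.01, 1.0)}

def fitness(genome, spectra, labels, epochs=3):
    # Build a one-hidden-layer network from the genome and train it
    # briefly with a simple delta rule on the output weights, mimicking
    # "innately guided" fast learning from limited exposure.
    n_in, n_out = spectra.shape[1], labels.max() + 1
    W1 = rng.normal(0, 0.1, (n_in, genome["hidden"]))   # fixed random features
    W2 = rng.normal(0, 0.1, (genome["hidden"], n_out))  # trained read-out
    targets = np.eye(n_out)[labels]
    for _ in range(epochs):
        h = np.tanh(spectra @ W1)
        W2 += genome["lr"] * h.T @ (targets - h @ W2) / len(spectra)
    preds = (np.tanh(spectra @ W1) @ W2).argmax(axis=1)
    return (preds == labels).mean()  # discrimination accuracy as fitness

def evolve(spectra, labels, pop_size=20, generations=10):
    # Truncation selection plus mutation over the genome population.
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(g, spectra, labels),
                        reverse=True)
        parents = scored[: pop_size // 2]
        children = [{"hidden": max(2, p["hidden"] + int(rng.integers(-2, 3))),
                     "lr": abs(p["lr"] + rng.normal(0, 0.05))}
                    for p in parents]
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, spectra, labels))

# Synthetic "speech spectra": two Gaussian clusters as phoneme classes.
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
best = evolve(X, y)
print("best genome:", best, "accuracy:", fitness(best, X, y))
```

The design choice this illustrates is the model's two-timescale structure: evolution (the outer loop) selects architectures and learning parameters, while each individual network (the inner loop) must learn its discrimination from only a brief exposure to data.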
