Abstract

Infants learn to speak rapidly during their first years of life, gradually progressing from simple vowel-like sounds to larger consonant-vowel complexes. Learning to control the vocal tract in order to produce meaningful speech sounds is a complex process which requires learning the relationship between motor and sensory processes. In this paper, a computational framework is proposed that models the problem of learning articulatory control for a physiologically plausible 3-D vocal tract model using a developmentally-inspired approach. The system babbles and explores efficiently in a low-dimensional space of goals that are relevant to the learner in its synthetic environment. The learning process is goal-directed and self-organized, and yields an inverse model of the mapping between sensory space and motor commands. This study provides a unified framework that can be used for learning static as well as dynamic motor representations. The successful learning of vowel and syllable sounds as well as the benefit of active and adaptive learning strategies are demonstrated. Categorical perception is found in the acquired models, suggesting that the framework has the potential to replicate phenomena of human speech acquisition.

Highlights

  • Speech production is a complex motor task that requires the simultaneous coordination of dozens of muscles and extremely fast movements

  • Whereas the inverse model should have a low weight threshold, such that newly discovered goal-space positions are quickly integrated into the inverse model, the workspace model should only cluster a region in goal space once it can be reached with a certain proficiency

  • The two clusters that are closest to each other are /o/ and /u/. The distance between them acts as a normalization factor, ensuring that the distance dWSM only falls below 1 when the distance to the closest workspace-model cluster is smaller than the distance between /o/ and /u/
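The normalized distance described in the last highlight can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the 2-D goal space, and the cluster coordinates are hypothetical; only the normalization rule (minimum distance to a workspace-model cluster, divided by the /o/–/u/ cluster distance) comes from the text.

```python
import numpy as np

def normalized_wsm_distance(point, cluster_centers, ref_a, ref_b):
    """Distance from `point` to the nearest workspace-model cluster,
    normalized by the distance between two reference clusters (here
    the /o/ and /u/ centers, the closest pair in the learned model).
    The result falls below 1 only when `point` is closer to some
    cluster than /o/ is to /u/."""
    centers = np.asarray(cluster_centers, dtype=float)
    point = np.asarray(point, dtype=float)
    d_min = np.min(np.linalg.norm(centers - point, axis=1))
    d_ref = np.linalg.norm(np.asarray(ref_a, float) - np.asarray(ref_b, float))
    return d_min / d_ref

# Hypothetical 2-D goal-space cluster centers, for illustration only:
o_center = np.array([0.0, 0.0])
u_center = np.array([0.3, 0.4])          # distance to /o/ is 0.5
clusters = [o_center, u_center, np.array([2.0, 2.0])]

d = normalized_wsm_distance([0.1, 0.0], clusters, o_center, u_center)
# 0.1 / 0.5 = 0.2, i.e. below 1: the point lies well inside explored space
```

With this normalization, any goal with a value below 1 lies within the region the learner has already reached proficiently, while values above 1 indicate unexplored goal-space positions.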


Summary

Introduction

Speech production is a complex motor task that requires the simultaneous coordination of dozens of muscles and extremely fast movements. State-of-the-art methods use large amounts of data to achieve remarkable performance [3, 4]. Such systems are trained on databases often containing hundreds of millions of words [5, 6], while infants are estimated to experience only around 20–40 million words in their first 3 years [7, 8]. Knowledge about how speech is produced helps to predict natural deviations of speech caused, for instance, by differences in the anatomy of the speaker. Drawing such inspiration from how humans learn to speak could benefit the development of speech recognition and production systems. Acknowledging the fundamental role of speech acquisition for human children, a growing number of computational approaches have been suggested in recent years to model speech acquisition in a more developmental way. The source code for the framework, designed to support different types of vocal tracts, acoustic features and learning mechanisms, is available at GitHub: https://github.com/aphilippsen/goalspeech

Related Models
A Computational Model of Speech Acquisition
Motor Representation
Ambient Speech
Auditory Perception
Goal Space
Learning by Babbling
Babbling Cycle
Workspace Model
Target Selection and Active Learning
Adaptation of Exploration Noise
Evaluating Learning Progress
Results
Effect of Exploration and Noise Adaptation Strategy
Evaluating Smoothness and Linearity
Discussion
Categorical Perception of Speech
Developmental Change During Learning
Limitations and Future Research Directions