Abstract

Statistical learning mechanisms play a central role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network (SRN) performed much like human learners: it was sensitive to both transitional probability and frequency, with frequency dominating early in learning and probability emerging as the dominant cue later in learning. In Simulation 2, an SRN captured links between statistical segmentation and word learning in infants and adults, and suggested that these links arise because phonological representations are more distinctive for syllables with higher transitional probability. Beyond simply simulating general phenomena, these models provide new insights into underlying mechanisms and generate novel behavioral predictions.
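
To make the mechanism concrete, the sketch below is a minimal Elman-style SRN in NumPy, not the authors' implementation: the toy lexicon, layer sizes, and learning rate are illustrative assumptions. Trained to predict the next syllable in a continuous, pause-free stream, such a network's prediction error stays low within words (where transitional probability is high) and spikes at word boundaries (where it is low), the standard segmentation signal these models exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy lexicon of trisyllabic words (Saffran-style stream);
# the specific syllables, network size, and learning rate are illustrative.
words = [["tu", "pi", "ro"], ["go", "la", "bu"],
         ["bi", "da", "ku"], ["pa", "do", "ti"]]
syllables = sorted({s for w in words for s in w})
idx = {s: i for i, s in enumerate(syllables)}
V, H = len(syllables), 20  # vocabulary size, hidden units

def one_hot(s):
    v = np.zeros(V)
    v[idx[s]] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Elman-style SRN weights: input->hidden, hidden->hidden (context), hidden->output.
W_xh = rng.normal(0, 0.5, (H, V))
W_hh = rng.normal(0, 0.5, (H, H))
W_hy = rng.normal(0, 0.5, (V, H))

# Build a continuous, pause-free stream of randomly ordered words
# (no immediate repetition of the same word).
stream, prev = [], None
for _ in range(2000):
    w = words[rng.integers(len(words))]
    while w is prev:
        w = words[rng.integers(len(words))]
    prev = w
    stream.extend(w)

# Train on next-syllable prediction with one-step truncated backprop.
lr, h = 0.1, np.zeros(H)
for t in range(len(stream) - 1):
    x, target = one_hot(stream[t]), one_hot(stream[t + 1])
    h_new = np.tanh(W_xh @ x + W_hh @ h)
    y = softmax(W_hy @ h_new)
    dy = y - target                      # cross-entropy prediction error
    dh = (W_hy.T @ dy) * (1 - h_new**2)  # backprop through tanh
    W_hy -= lr * np.outer(dy, h_new)
    W_xh -= lr * np.outer(dh, x)
    W_hh -= lr * np.outer(dh, h)
    h = h_new

# Probe: surprisal is low within words (transitional probability 1.0 here)
# and spikes at word boundaries (transitional probability 1/3).
h = np.zeros(H)
for t in range(30):
    h = np.tanh(W_xh @ one_hot(stream[t]) + W_hh @ h)
    y = softmax(W_hy @ h)
    print(f"{stream[t]} -> {stream[t + 1]}: "
          f"surprisal {-np.log(y[idx[stream[t + 1]]]):.2f}")
```

In this toy stream, within-word transitions are deterministic while boundary transitions are one of three equiprobable words, so after training the boundary surprisal approaches log 3 while within-word surprisal approaches zero; the frequency-versus-probability dynamics reported in Simulation 1 would require manipulating word frequencies in the stream, which this sketch does not do.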
