Abstract

In this paper, we discuss a computational model that is able to detect and build word-like representations on the basis of sensory input. The model is designed and tested with the further aim of investigating how infants may learn to communicate by means of spoken language. The computational model makes use of a memory, a perception module, and the concept of a 'learning drive'. Learning takes place within a communicative loop between a 'caregiver' and the 'learner'. Experiments carried out on three European languages with different genetic backgrounds (Finnish, Swedish, and Dutch) show that a robust word representation can be learned using fewer than 100 acoustic tokens (examples) of that word. The model is inspired by the memory structure that is assumed to be functional in human cognitive processing.
