Abstract

Learning the meanings of words in noisy contexts, where an utterance contains multiple unknown words and a scene contains multiple unknown objects, is a typical part of language acquisition for infants. For artificial intelligence, however, incremental word learning in such ambiguous contexts remains a challenging problem. Past models of cross-situational word learning benefit from full access to all learning situations and their statistical regularities to arrive at the right hypothesis, yet it is cognitively implausible for children to remember every word-learning situation they encounter. Hence, we present an incremental Bayesian model of cross-situational word learning with limited access to past situations and show that it outperforms baseline incremental models, especially under sensory noise in the speech and visual modalities. We then embed our model in a cognitive robotic architecture and demonstrate the first robotic model capable of incremental cross-situational word learning.
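To make the setting concrete, the following is a minimal sketch of incremental cross-situational word learning. It is not the paper's Bayesian model or its robotic architecture; it is a generic associative learner (in the spirit of earlier incremental cross-situational learners) that processes one ambiguous utterance-scene pair at a time and retains only running association scores, not the situations themselves. All names and the toy data are illustrative assumptions.

```python
# Illustrative sketch only: an incremental cross-situational learner that keeps
# no record of past situations, just accumulated word-object associations.
# This is NOT the Bayesian model described in the abstract.
from collections import defaultdict


class IncrementalLearner:
    def __init__(self, smoothing=1e-3):
        self.assoc = defaultdict(lambda: defaultdict(float))  # assoc[word][obj]
        self.smoothing = smoothing
        self.objects_seen = set()

    def meaning_prob(self, word, obj):
        """Current estimate of p(object | word) from accumulated associations."""
        total = sum(self.assoc[word].values())
        denom = total + self.smoothing * (len(self.objects_seen) + 1)
        return (self.assoc[word][obj] + self.smoothing) / denom

    def observe(self, words, objects):
        """Update associations from one ambiguous utterance-scene pair."""
        self.objects_seen.update(objects)
        for obj in objects:
            # Credit each word in proportion to the learner's current beliefs.
            norm = sum(self.meaning_prob(w, obj) for w in words)
            for w in words:
                self.assoc[w][obj] += self.meaning_prob(w, obj) / norm


# Toy usage: three ambiguous situations gradually disambiguate "ball" -> BALL.
learner = IncrementalLearner()
situations = [
    (["ball", "red"], ["BALL", "BOX"]),
    (["ball", "big"], ["BALL", "DOG"]),
    (["dog", "small"], ["DOG", "BOX"]),
]
for words, objects in situations:
    learner.observe(words, objects)
print(learner.meaning_prob("ball", "BALL") > learner.meaning_prob("ball", "BOX"))
```

The key property shared with the abstract's setting is incrementality: each situation is consumed once and discarded, so disambiguation must emerge from statistics accumulated across situations rather than from revisiting stored ones.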
