Abstract

Humans acquire language and related concepts along a trajectory that spans a lifetime. Concepts for simple interaction with the world are learned before language. Later, words are learned to name these concepts, along with the structures needed to represent larger meanings. Eventually, language advances to the point where it can drive the learning of new concepts. Throughout this trajectory, a language processing capability uses architectural mechanisms to process language with the knowledge already acquired. We assume that this growing body of knowledge is made up of small units of form-meaning mapping that can be composed in many ways, suggesting that these units are learned incrementally from experience. In prior work we built a system that comprehends human language within an autonomous robot, using knowledge in such units developed by hand. Here we propose a research program to develop the ability of an artificial agent to acquire this knowledge incrementally and autonomously from its experience, along a similar trajectory. We then propose a strategy for evaluating this human-like learning system using a large benchmark created as a tool for training deep learning systems. We expect that our human-like learning system will produce better task performance from training on only a small subset of this benchmark.
