Abstract

Neural networks have been proposed as an ideal cognitive modeling methodology for dealing with the symbol grounding problem. More recently, such neural network approaches have been incorporated in studies based on cognitive agents and robots. In this paper we present a new model of symbol grounding transfer in cognitive robots. Language learning simulations demonstrate that robots are able to acquire new action concepts via linguistic instructions. This is achieved by autonomously transferring the grounding from directly grounded action names to new higher-order composite actions. The robot's neural network controller permits such a grounding transfer. The implications of such a modeling approach for cognitive science and autonomous robotics are discussed.

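To illustrate the idea of grounding transfer described above, the following is a minimal, hypothetical sketch (not the paper's actual neural controller): basic action names are grounded directly in motor feature vectors, and a higher-order action name acquired only through a linguistic definition inherits a grounding composed from its constituents. The action names, vectors, and composition rule are illustrative assumptions.

```python
# Hypothetical sketch of grounding transfer: a composite action name,
# introduced only via a linguistic definition, receives a motor grounding
# derived from the groundings of its directly grounded constituents.

import numpy as np

# Directly grounded basic actions: name -> motor feature vector (assumed values)
basic_groundings = {
    "close_left_arm":  np.array([1.0, 0.0, 0.0, 0.0]),
    "close_right_arm": np.array([0.0, 1.0, 0.0, 0.0]),
    "lift_left_arm":   np.array([0.0, 0.0, 1.0, 0.0]),
    "lift_right_arm":  np.array([0.0, 0.0, 0.0, 1.0]),
}

def transfer_grounding(definition: str, groundings: dict) -> np.ndarray:
    """Derive a grounding for a composite action from a linguistic
    definition such as 'grab = close_left_arm + close_right_arm'."""
    name, rhs = (part.strip() for part in definition.split("="))
    constituents = [tok.strip() for tok in rhs.split("+")]
    # Combine constituent groundings (here: element-wise maximum,
    # an arbitrary choice for illustration).
    composite = np.max([groundings[c] for c in constituents], axis=0)
    groundings[name] = composite
    return composite

# "grab" is never demonstrated directly, only described linguistically,
# yet it acquires a motor grounding through its grounded constituents.
print(transfer_grounding("grab = close_left_arm + close_right_arm",
                         basic_groundings))
```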