Abstract

Developmental cognitive systems can endow robots with the ability to incrementally learn knowledge and autonomously adapt to complex environments. Conventional cognitive methods often acquire knowledge through passive perception, such as observing and listening. However, this passive mode of learning inevitably produces incorrect representations, and without feedback the system cannot correct them online. To tackle this problem, we propose a biologically inspired hierarchical cognitive system called Self-Organizing Developmental Cognitive Architecture with Interactive Reinforcement Learning (SODCA-IRL). The architecture introduces interactive reinforcement learning into hierarchical self-organizing incremental neural networks to simultaneously learn object concepts and fine-tune the learned knowledge by interacting with humans. To realize this integration, we equip individual neural networks with a memory model, designed as an exponential function controlled by two forgetting factors, to simulate the consolidation and forgetting processes of humans. In addition, an interactive reinforcement strategy is designed to provide appropriate rewards and execute mistake correction. The feedback acts on the forgetting factors to reinforce or weaken the memory of neurons. Therefore, correct knowledge is preserved while incorrect representations are forgotten. Experimental results show that the proposed method makes effective use of human feedback to significantly improve learning effectiveness and reduce model redundancy.
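The abstract describes a memory model in which each neuron's memory strength follows an exponential function governed by forgetting factors, and human feedback reinforces or weakens that memory. The sketch below is a minimal illustration of this general idea, not the paper's actual formulation: the class name, parameter names, and numeric values are all assumptions.

```python
import math

class NeuronMemory:
    """Hypothetical sketch of an exponentially decaying memory trace.

    Memory strength decays over time (forgetting); human feedback adjusts
    the decay rate, so positive rewards consolidate a memory and negative
    rewards make it fade faster. All parameters here are illustrative.
    """

    def __init__(self, decay_rate=0.1, feedback_gain=0.05):
        self.decay_rate = decay_rate      # illustrative forgetting factor
        self.feedback_gain = feedback_gain  # how strongly feedback shifts decay
        self.strength = 1.0               # initial memory strength

    def step(self, dt=1.0):
        # Exponential forgetting: strength decays toward zero over time.
        self.strength *= math.exp(-self.decay_rate * dt)

    def feedback(self, reward):
        # Positive reward slows forgetting (consolidation); negative reward
        # accelerates it, so incorrect representations are forgotten sooner.
        self.decay_rate = max(0.0, self.decay_rate - self.feedback_gain * reward)

# A neuron that receives positive feedback retains its memory longer
# than one that receives negative feedback.
reinforced, weakened = NeuronMemory(), NeuronMemory()
reinforced.feedback(+1.0)
weakened.feedback(-1.0)
for _ in range(10):
    reinforced.step()
    weakened.step()
```

Under these toy settings, the reinforced neuron's strength stays well above the weakened one's after ten time steps, matching the abstract's claim that feedback preserves correct knowledge while incorrect representations are forgotten.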
