Abstract

How can agents, natural or artificial, learn about the external environment based only on their internal state (such as the activation patterns in the brain)? There are two problems involved here: first, forming the internal state based on sensory data to reflect reality, and second, forming thoughts and desires based on these internal states. (Aristotle termed these the passive and active intellect, respectively [1].) How are these to be accomplished? Chapters in this book consider mechanisms of the instinct for learning (chapter PERLOVSKY) and reinforcement learning (chapter IFTEKHARUDDIN; chapter WERBOS), which modify the mind’s representation to better fit sensory data. Our approach (like those in chapters FREEMAN and KOZMA) emphasizes the importance of action in this process. Action plays a key role in recovering the sensory stimulus properties that are represented by the internal state. Generating the right kind of action is essential to decoding the internal state. Action that maintains invariance in the internal state is important, as it will have the same property as that of the represented sensory stimulus. However, such an approach alone does not address how it can be generalized to learn more complex object concepts. We emphasize that this limitation is due to the reactive nature of the sensorimotor interaction in the agent: lack of long-term memory prevents learning beyond basic stimulus properties such as the orientation of the input. Adding memory can help the learning of complex object concepts, but what kind of memory should be used, and why? The main aim of this chapter is to assess the merit of memory of action sequences linked with particular spatiotemporal patterns (skill memory), as compared to explicit memory of visual form (visual memory), all within an object recognition domain.
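The idea that an invariance-preserving action shares the property of the stimulus it represents can be illustrated with a minimal sketch. The following toy example is our own illustration, not the authors' model: a gaze agent whose internal state is the pixel under its gaze, viewing a diagonally oriented line. A gaze movement that leaves the internal state unchanged must run along the line, so decoding that action recovers the stimulus orientation.

```python
import numpy as np

# Hypothetical sketch: a 2D image containing a 45-degree line stimulus.
size = 9
img = np.zeros((size, size))
for i in range(size):
    img[i, i] = 1.0  # the diagonal line

def state_change(pos, move, steps=4):
    """Accumulated change in the internal state (the pixel under the
    gaze) while repeating one gaze movement `move` from `pos`."""
    r, c = pos
    prev = img[r, c]
    total = 0.0
    for _ in range(steps):
        r, c = r + move[0], c + move[1]
        total += abs(img[r, c] - prev)
        prev = img[r, c]
    return total

start = (2, 2)                         # gaze begins on the line
along = state_change(start, (1, 1))    # movement along the diagonal
across = state_change(start, (1, 0))   # movement off the diagonal

# The action that keeps the internal state invariant (moving along the
# line) has the same orientation as the stimulus it represents.
assert along == 0.0 and across > 0.0
```

The invariance-preserving action here is itself oriented at 45 degrees, matching the stimulus; this is the sense in which the right action decodes the internal state.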
Our results indicate that skill memory is (1) better than visual memory in terms of recognition performance, (2) robust to noise and variations, and (3) better suited as a flexible internal representation. These results suggest that the dynamic nature of skill memory, with its involvement in the closure of the agent-environment loop, provides a strong basis for robust and autonomous object concept learning.

