Abstract

It is shown that associative memory networks can solve both immediate and general reinforcement learning (RL) problems by combining techniques from associative neural networks with reinforcement learning, in particular Q-learning. The modified model is shown to outperform native RL techniques on a stochastic grid-world task by developing correct policies. In addition, we formulate an analogous method that adds feature extraction via dimensionality reduction, and eligibility traces as a further mechanism for addressing the credit assignment problem. Unlike pure RL methods, the network is grounded in associative memory principles such as distributed representation of information, pattern completion, Hebbian learning, and noise tolerance (limit cycles, one-to-many associations, chaos, etc.). Because of this, it can be argued that the model has more cognitive explanatory power than other RL or hybrid models, and it may be an effective tool for bridging the gap between biological memory models and computational memory models.
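To make the RL side of this combination concrete, the sketch below shows standard tabular Q-learning with accumulating eligibility traces on a toy grid world. It is a generic illustration only: the grid layout, parameter values, and variable names are assumptions made for this example, and the model described in the paper replaces the tabular Q-function with an associative memory network.

```python
import numpy as np

# Minimal tabular Q-learning with accumulating eligibility traces on a toy
# 4x4 grid world. All names and parameters are illustrative; they are not
# taken from the paper.
N, GOAL = 4, 15                                # 4x4 grid, goal in the last cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1

Q = np.zeros((N * N, len(ACTIONS)))

def step(s, a):
    r, c = divmod(s, N)
    dr, dc = ACTIONS[a]
    r, c = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    s2 = r * N + c
    return s2, (1.0 if s2 == GOAL else -0.01), s2 == GOAL

for episode in range(500):
    s, E = 0, np.zeros_like(Q)                 # reset state and eligibility traces
    done = False
    while not done:
        a = np.random.randint(len(ACTIONS)) if np.random.rand() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        delta = r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a]
        E[s, a] += 1.0                         # accumulate trace for the visited pair
        Q += alpha * delta * E                 # credit flows back along the trace
        E *= gamma * lam                       # traces decay toward older states
        s = s2

print("Greedy value of the start state:", Q[0].max())
```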

Highlights

  • Associative memory (AM) can be seen as a possible computational model of the brain and human control, because it appears to be one of the most important functions in many cognitive processes. Many brain structures can be modeled as associative memories

  • This is evident in our remarkable ability at pattern recognition and pattern completion, which AM networks excel at. We know the brain is capable of many forms of learning, both supervised and unsupervised

  • In this chapter we have demonstrated a working model of our system on various simulations and problems, in particular a stochastic Gridworld problem and a modified game of Tetris


Summary

Introduction

Associative memory (AM) can be seen as a possible computational model of the brain and human control, because it appears to be one of the most important functions in many cognitive processes. Many brain structures can be modeled as associative memories. This is evident in our remarkable ability at pattern recognition and pattern completion, which AM networks excel at. We know the brain is capable of many forms of learning, both supervised and unsupervised. If these theories are correct, our models must be able to perform the various learning tasks a human can, including reinforcement learning. A case can be made that associative memories are well suited to model human learning because of their dynamical properties. These include the ability to exhibit attractor behaviour such as fixed points, limit cycles and strange attractors, which are essential to dealing with noisy inputs and which Skarda and Freeman [3, 4] have argued are fundamental to the way the brain stores and recalls information.
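
As a concrete illustration of the associative-memory properties mentioned above (Hebbian learning, pattern completion, fixed-point attractor behaviour), the following sketch implements a generic Hopfield-style network. The network size, stored patterns, and noise level are illustrative assumptions for this example, not details of the specific model described in this work.

```python
import numpy as np

# A minimal Hopfield-style associative memory: Hebbian (outer-product) storage
# and iterative recall that settles into a fixed-point attractor, illustrating
# pattern completion and noise tolerance. Generic sketch, not the paper's model.
rng = np.random.default_rng(0)
n, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian learning: sum of outer products, with self-connections removed.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Update synchronously until the state stops changing (a fixed point)."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Corrupt 15% of one stored pattern and let the network complete it.
cue = patterns[0].copy()
flip = rng.choice(n, size=int(0.15 * n), replace=False)
cue[flip] *= -1
print("overlap with stored pattern:", int(recall(cue) @ patterns[0]), "/", n)
```

Recall from the corrupted cue converges back to the stored pattern, which is the pattern-completion behaviour the model relies on when states are noisy.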
