Abstract

It is shown that associative memory networks can solve immediate-reward and general (delayed-reward) reinforcement learning (RL) problems by combining techniques from associative neural networks with those of reinforcement learning, in particular Q-learning. The modified model is shown to outperform native RL techniques on a stochastic grid-world task by developing correct policies. In addition, we formulate an analogous method that adds feature extraction, in the form of dimensionality reduction, and eligibility traces as a further mechanism for addressing the credit assignment problem. In contrast to pure RL methods, the network is grounded in associative memory principles such as distribution of information, pattern completion, Hebbian learning, and noise tolerance (limit cycles, one-to-many associations, chaos, etc.). Because of this, it can be argued that the model possesses greater cognitive explanatory power than other RL or hybrid models, and it may be an effective tool for bridging the gap between biological memory models and computational memory models.
