Abstract

It is shown that associative memory networks can solve both immediate and general reinforcement learning (RL) problems by combining techniques from associative neural networks with reinforcement learning, in particular Q-learning. The modified model is shown to significantly outperform pure RL techniques on a stochastic grid world task by developing optimal policies. In contrast to pure RL methods, the network is based on associative memory principles such as distribution of information, pattern completion, Hebbian learning, attractors, and noise tolerance. Because of this, it can be argued that the model possesses greater cognitive explanatory power than pure reinforcement learning methods or other hybrid models, and that it can serve as an effective tool for bridging the gap between biological memory models and computational memory models.

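For readers unfamiliar with the Q-learning component referenced above, the following is a minimal sketch of tabular Q-learning on a stochastic grid world. The grid layout, slip probability, reward scheme, and hyperparameters are illustrative assumptions and do not reproduce the paper's model.

```python
# A minimal sketch of tabular Q-learning on a stochastic grid world.
# All parameters below (grid size, slip probability, rewards, learning
# rate) are illustrative assumptions, not values from the paper.
import random

ROWS, COLS = 4, 4
GOAL = (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPSILON, SLIP = 0.1, 0.95, 0.1, 0.2

# Q-table: maps (state, action) pairs to estimated returns.
Q = {((r, c), a): 0.0
     for r in range(ROWS) for c in range(COLS)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; with probability SLIP a random action occurs instead."""
    if random.random() < SLIP:
        action = random.randrange(len(ACTIONS))
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), ROWS - 1)
    c = min(max(state[1] + dc, 0), COLS - 1)
    next_state = (r, c)
    reward = 1.0 if next_state == GOAL else -0.01  # small step cost
    return next_state, reward, next_state == GOAL

def act(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

for episode in range(2000):
    state, done = (0, 0), False
    for _ in range(200):  # cap episode length
        action = act(state)
        next_state, reward, done = step(state, action)
        # Standard Q-learning update: move Q(s, a) toward the bootstrapped
        # target r + gamma * max_a' Q(s', a').
        best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state
        if done:
            break
```

In the paper's model, this explicit table would presumably be replaced by a distributed, Hebbian associative store with attractor dynamics, which is what would give the approach its pattern-completion and noise-tolerance properties; the sketch above only illustrates the RL component being combined.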