Abstract
It is shown that associative memory networks can solve both immediate and general reinforcement learning (RL) problems by combining techniques from associative neural networks with RL, in particular Q-learning. The modified model significantly outperforms native RL techniques on a stochastic grid-world task by developing correct optimal policies. Unlike pure RL methods, the network is grounded in associative memory principles such as distributed representation of information, pattern completion, Hebbian learning, attractor dynamics, and noise tolerance. It can therefore be argued that the model has greater cognitive explanatory power than pure RL methods or other hybrid models, and that it can serve as an effective tool for bridging the gap between biological and computational memory models.
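The abstract evaluates the model against native RL baselines on a stochastic grid-world task. As a point of reference, the tabular Q-learning baseline on such a task can be sketched as follows; the grid size, reward scheme, slip probability, and hyperparameters below are illustrative assumptions, not details taken from the paper:

```python
import random

def train_q_learning(episodes=2000, size=4, alpha=0.1, gamma=0.9,
                     epsilon=0.1, slip=0.1, seed=0):
    """Tabular Q-learning on a toy stochastic grid world (illustrative sketch).

    States are the cells of a size x size grid; the agent starts at (0, 0)
    and receives +1 for reaching the goal at (size-1, size-1). With
    probability `slip`, the chosen action is replaced by a random one,
    which makes the transition dynamics stochastic.
    """
    rng = random.Random(seed)
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    goal = (size - 1, size - 1)
    q = {(r, c): [0.0] * 4 for r in range(size) for c in range(size)}

    def step(state, a):
        if rng.random() < slip:          # stochastic slip replaces the action
            a = rng.randrange(4)
        dr, dc = moves[a]
        nxt = (min(max(state[0] + dr, 0), size - 1),
               min(max(state[1] + dc, 0), size - 1))
        return nxt, (1.0 if nxt == goal else 0.0), nxt == goal

    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q[s][i])
            s2, reward, done = step(s, a)
            # standard Q-learning temporal-difference update
            target = reward + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q
```

The associative-memory model described in the abstract would replace the explicit lookup table `q` with a distributed, Hebbian-trained attractor representation of state-action values; the table above only illustrates the baseline being compared against.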