Abstract

A typical approach to studying cognitive function is to record the electrical activity of neurons while animals are trained to perform behavioral tasks. A key limitation is that such recordings cannot capture all of the relevant neurons in the brain. To alleviate this problem, we develop an RNN-based Actor–Critic framework, trained through reinforcement learning (RL) to solve two tasks analogous to the monkeys’ decision-making tasks. The trained model reproduces several features of neural activity recorded from animal brains, as well as behavioral properties observed in animal experiments, suggesting that it can serve as a computational platform for exploring other cognitive functions. Furthermore, we conduct behavioral experiments on our framework to address an open question in neuroscience: which episodic memories in the hippocampus should be selected to ultimately govern future decisions. We find that retrieving salient events sampled from episodic memory shortens deliberation time more effectively than retrieving common events during decision-making. These results indicate that salient events stored in the hippocampus could be prioritized for propagating reward information, allowing decision-makers to learn a strategy faster.
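
To make the framework described above concrete, the sketch below shows a minimal recurrent Actor–Critic in PyTorch trained with a simple advantage actor-critic update. It is an illustrative assumption, not the authors' exact model: the network sizes, the GRU core, the task interface, and the hyperparameters are placeholders, and the abstract does not specify the episodic-memory sampling mechanism, which is omitted here.

```python
# Illustrative sketch (assumed details, not the paper's exact architecture):
# a GRU-based actor-critic whose recurrent hidden state serves as working
# memory, trained with a basic advantage actor-critic (A2C) update.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden_dim)           # recurrent core
        self.policy_head = nn.Linear(hidden_dim, n_actions)  # actor: action logits
        self.value_head = nn.Linear(hidden_dim, 1)            # critic: state value

    def forward(self, obs, h):
        h = self.rnn(obs, h)                                   # update hidden state
        return self.policy_head(h), self.value_head(h).squeeze(-1), h

def a2c_update(model, optimizer, trajectory, gamma=0.99, value_coef=0.5):
    """One update from a single episode; trajectory holds (log_prob, value, reward)."""
    returns, G = [], 0.0
    for (_, _, r) in reversed(trajectory):                     # discounted returns
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(list(reversed(returns)))

    log_probs = torch.stack([lp for (lp, _, _) in trajectory])
    values = torch.stack([v for (_, v, _) in trajectory])
    advantages = returns - values.detach()                     # advantage estimates

    policy_loss = -(log_probs * advantages).mean()             # actor loss
    value_loss = F.mse_loss(values, returns)                   # critic loss
    loss = policy_loss + value_coef * value_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup, prioritizing salient (e.g., high-reward or surprising) transitions when replaying stored episodes would be one plausible way to realize the memory-selection effect the abstract reports, conceptually similar to prioritized experience replay.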
