Abstract

Artificial neural networks suffer from severe catastrophic forgetting (interference) when information is learned sequentially. A significant effort in the machine learning community is devoted to solving this problem. Many approaches to overcoming catastrophic interference (CI) find parallels in the organization of the human memory system. In this paper, we review biologically inspired approaches to CI prevention, with the main emphasis on methods inspired by the generative properties of the brain. We developed and tested several methods for preventing CI that use an artificial dataset generated from the neural network's previous experience. The proposed methods include an activation maximization approach, a method based on Bayesian learning, and a method based on generative neural networks. Methods that combine episodic memory (a few stored samples) with semantic memory (sampling from the posterior probability distribution) outperform other recent methods for CI prevention. Biologically plausible mechanisms of active forgetting and memory reconsolidation, built on the generative approaches, are also demonstrated. Proof-of-concept experiments were performed on several publicly available datasets.
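
To make the underlying idea concrete, the sketch below illustrates the general pseudo-rehearsal (generative replay) scheme the abstract refers to: an artificial dataset sampled from a generative model of past experience is interleaved with the new task's data, with pseudo-labels supplied by a frozen copy of the previously trained network. This is a minimal, assumption-laden illustration rather than the paper's implementation; all module names, layer sizes, and the toy noise-decoding generator are hypothetical.

```python
# Hypothetical sketch of generative replay (pseudo-rehearsal) against
# catastrophic interference; names and sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_classifier(n_in=32, n_classes=4):
    # Small feed-forward classifier used for both the old and the new task.
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_classes))

class Generator(nn.Module):
    """Toy generative model of old inputs: decodes Gaussian noise into samples.

    In a real setup this would be trained on previous-task data
    (e.g. as a VAE decoder); here it only stands in for such a model.
    """
    def __init__(self, n_latent=8, n_out=32):
        super().__init__()
        self.n_latent = n_latent
        self.decode = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                    nn.Linear(64, n_out))

    def sample(self, n):
        z = torch.randn(n, self.n_latent)
        return self.decode(z)

def train_new_task(classifier, old_classifier, generator, new_loader,
                   replay_batch=32, epochs=1, lr=1e-3):
    """Interleave real new-task batches with generated 'old experience' batches.

    Pseudo-labels for generated samples come from a frozen copy of the
    network trained on the previous task(s), so old knowledge keeps being
    rehearsed while the new task is learned.
    """
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    old_classifier.eval()
    for _ in range(epochs):
        for x_new, y_new in new_loader:
            with torch.no_grad():
                # Artificial dataset drawn from the model of past experience.
                x_old = generator.sample(replay_batch)
                y_old = old_classifier(x_old).argmax(dim=1)
            x = torch.cat([x_new, x_old])
            y = torch.cat([y_new, y_old])
            loss = F.cross_entropy(classifier(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return classifier
```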
