Abstract

In this paper we explore the consolidation of information in neural network learning. One problem in particular has limited the ability of a broad range of neural networks to perform ongoing learning and consolidation: 'catastrophic forgetting', the tendency for newly learned information to disrupt old information. We review and slightly extend the rehearsal and pseudorehearsal solutions to the catastrophic forgetting problem presented in Robins (1995). The main focus of this paper is then to relate these mechanisms to the consolidation processes that have been proposed in the psychological literature regarding sleep. We suggest that the catastrophic forgetting problem in artificial neural networks (ANNs) has actually arisen in the evolution of the mammalian brain, and that the pseudorehearsal solution to the problem in ANNs is functionally equivalent to the sleep consolidation solution adopted by the brain. Finally, we review related work by McClelland et al. (1995) and propose a tentative model of learning and sleep that emphasizes consolidation mechanisms and the role of the hippocampus.
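
To make the pseudorehearsal mechanism named above concrete, the following is a minimal sketch in Python/NumPy. It is not the paper's implementation: the network architecture, sizes, learning rate, and item counts are illustrative assumptions. The core idea it demonstrates is from Robins (1995): before learning new items, probe the trained network with random inputs and record its responses as 'pseudoitems', then train on the new items interleaved with the pseudoitems, which protects the old input-output mapping without storing the original training data.

```python
import numpy as np

# Minimal pseudorehearsal sketch (after Robins, 1995). All sizes and
# hyperparameters are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyNet:
    """One-hidden-layer network trained by plain gradient descent."""
    def __init__(self, n_in=8, n_hidden=16, n_out=8, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train_step(self, x, t):
        y = self.forward(x)
        # Backprop for squared error with sigmoid units.
        d2 = (y - t) * y * (1 - y)
        d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d2)
        self.W1 -= self.lr * np.outer(x, d1)

def make_items(n, n_in=8, n_out=8):
    """Random binary input-output pairs standing in for learned items."""
    return [(rng.integers(0, 2, n_in).astype(float),
             rng.integers(0, 2, n_out).astype(float)) for _ in range(n)]

def train(net, items, epochs=2000):
    for _ in range(epochs):
        for x, t in items:
            net.train_step(x, t)

def error(net, items):
    return np.mean([np.mean((net.forward(x) - t) ** 2) for x, t in items])

net = TinyNet()
old_items = make_items(10)
train(net, old_items)                # Phase 1: learn the old population.

# Build pseudoitems: random inputs paired with the *current* network's
# outputs. These approximate the old items without storing them.
pseudo_inputs = [rng.integers(0, 2, 8).astype(float) for _ in range(32)]
pseudoitems = [(x, net.forward(x).copy()) for x in pseudo_inputs]

new_items = make_items(5)
train(net, new_items + pseudoitems)  # Phase 2: new items + pseudorehearsal.

print("error on old items after new learning:", error(net, old_items))
```

Training Phase 2 on the new items alone would typically drive the error on the old items sharply upward (catastrophic forgetting); interleaving the pseudoitems keeps the old mapping largely intact, which is the sense in which the paper likens pseudorehearsal to consolidation during sleep.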
