Abstract

Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial neural networks, such memory replay can be implemented as ‘generative replay’, which can successfully – and surprisingly efficiently – prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario. However, scaling up generative replay to complicated problems with many tasks or complex inputs is challenging. We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network’s own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks (e.g., class-incremental learning on CIFAR-100) without storing data, and it provides a novel model for replay in the brain.

Highlights

  • Artificial neural networks suffer from catastrophic forgetting

  • We propose a new variant of generative replay (GR), in which internal or hidden representations generated by the network’s own, context-modulated feedback connections are replayed (a code sketch follows this list)

  • We demonstrate that this brain-inspired replay method achieves state-of-the-art performance on challenging continual learning benchmarks with many tasks (≥100) or complex inputs without the need to store data
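To make the proposed variant concrete, below is a minimal, hypothetical PyTorch sketch; it is not the authors’ exact architecture: the class name, layer sizes, and the fixed random binary gates are illustrative assumptions. The point it illustrates is that replay operates on hidden representations, generated by the network’s own feedback (decoder) connections and modulated by a per-context gate.

```python
import torch
import torch.nn as nn

class ContextGatedReplayNet(nn.Module):
    """Sketch of brain-inspired replay: the feedback pathway regenerates
    hidden-layer activity patterns, gated per context (hypothetical names
    and sizes; not the paper's exact model)."""

    def __init__(self, in_dim=784, hid_dim=256, lat_dim=32,
                 n_classes=10, n_contexts=5, gate_prop=0.5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.to_latent = nn.Linear(hid_dim, 2 * lat_dim)   # VAE mean and log-variance
        self.feedback = nn.Linear(lat_dim, hid_dim)        # generative feedback connections
        self.classifier = nn.Linear(hid_dim, n_classes)
        # One fixed random binary mask over the feedback units per context.
        self.register_buffer(
            "gates", (torch.rand(n_contexts, hid_dim) < gate_prop).float())

    def generate_hidden(self, context, n=32):
        """Replay: sample latents and pass them through the context-gated
        feedback pathway, yielding hidden representations (no raw inputs)."""
        z = torch.randn(n, self.feedback.in_features, device=self.gates.device)
        return torch.relu(self.feedback(z)) * self.gates[context]

    def forward(self, x):
        return self.classifier(self.encoder(x))
```

In such a scheme, a frozen copy of the previous model would label the generated hidden patterns, and the layers above the replayed level would then train on them interleaved with hidden representations of new data.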


Introduction

Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, which has clearly implemented an efficient and scalable algorithm for continual learning, the reactivation of neuronal activity patterns that represent previous experiences is believed to be important for stabilizing new memories[8,9,10,11]. Such memory replay is orchestrated by the hippocampus, is also observed in the cortex[12,13], and occurs mainly during sharp-wave/ripples in both sleep and wakefulness[14]. As an alternative to storing data, here we focus on generating the data to be replayed with a learned generative neural network model of past observations[19,20,21] (Fig. 1b).
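A minimal sketch of such a generative-replay loop in PyTorch, under stated assumptions: `prev_generator` is a previously trained generative model (e.g., a VAE) exposing a hypothetical `sample(n)` method, and `prev_model` is a frozen copy of the classifier from before the current task; training of the generator itself is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def train_task_with_replay(model, loader, optimizer,
                           prev_model=None, prev_generator=None,
                           replay_weight=0.5):
    """One pass over the current task's data, interleaved with replayed
    samples drawn from a generative model of past observations (Fig. 1b)."""
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)                  # current-task loss
        if prev_generator is not None:
            with torch.no_grad():
                x_replay = prev_generator.sample(len(x))     # hypothetical sampler
                y_replay = prev_model(x_replay).softmax(-1)  # soft labels from old model
            # Distillation: match the old model's outputs on the replayed inputs.
            replay_loss = F.kl_div(model(x_replay).log_softmax(-1),
                                   y_replay, reduction="batchmean")
            loss = (1.0 - replay_weight) * loss + replay_weight * replay_loss
        loss.backward()
        optimizer.step()
```

After each task, frozen copies (e.g., via `copy.deepcopy(model).eval()`) take over as `prev_model` and `prev_generator`, so no data from past tasks needs to be stored.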


