Learning and associative memory are concerned with storing and retrieving activity patterns of a neuronal net. A minimal requirement is that the number of patterns that can be stored faithfully is extensive, i.e. at least proportional to the number of neighbours each neuron interacts with. The main drawback of Hebbian learning, and of any one-shot local learning procedure, is that it cannot store an extensive number of patterns whose activities vary from pattern to pattern because, being local, it cannot discern global correlations. We critically review the performance of Hebbian unlearning, also proposed as a model of REM sleep, in a network of formal neurons with a distribution of axonal delays. Hebbian unlearning, though as local and unsupervised as Hebbian learning, eliminates undesirable global correlations, handles any spatio-temporal pattern, and greatly improves the network performance, sometimes even saturating a theoretical upper bound. Furthermore, it is shown that unlearning has an optimal number of unlearning loops ('dreams'), at which storage and retrieval are optimal, and a critical number, above which they break down completely. In applications, 'blinking' is a most useful property: after stationary patterns have been unlearned, a network presented with a noisy pattern either converges to the original one or ends up in a blinking state, signalling that the pattern has not been learned. In a daily context, data acquisition and REM sleep alternate. Numerical simulations have shown that a network which alternates between learning bundles of patterns and unlearning stores and retrieves only the most recently learned data and 'forgets' the past. Since a key property of unlearning is the elimination of undesirable correlations, the process would provide a straightforward explanation of Freudian condensation. In other words, the unlearning viewpoint offers a comprehensive perspective.
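For concreteness, the basic Hebbian learning and unlearning scheme for a Hopfield-type network of binary neurons can be sketched as follows. This is only a minimal illustration: the network size, unlearning rate and number of 'dreams' are assumed values chosen for the example, and the delayed, spatio-temporal version reviewed in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # number of neurons (illustrative)
P = 20         # number of random +/-1 patterns to store (illustrative)
EPS = 0.01     # unlearning rate (illustrative)
DREAMS = 500   # number of unlearning loops, i.e. 'dreams' (illustrative)

# Random binary patterns; low-activity or spatio-temporal patterns would
# require correspondingly structured data, which is not shown here.
patterns = rng.choice([-1, 1], size=(P, N))

# One-shot Hebbian learning: J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, J_ii = 0
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def relax(J, s, sweeps=50):
    """Asynchronous zero-temperature dynamics until an (approximate) fixed point."""
    s = s.copy()
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1 if J[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Hebbian unlearning ('dreaming'): relax from a random state and subtract a
# small Hebbian term built from whatever attractor the dynamics reached.
for _ in range(DREAMS):
    attractor = relax(J, rng.choice([-1, 1], size=N))
    J -= EPS * np.outer(attractor, attractor) / N
    np.fill_diagonal(J, 0.0)

# Retrieval test: present a noisy version of the first stored pattern.
cue = patterns[0] * np.where(rng.random(N) < 0.15, -1, 1)  # flip ~15% of bits
overlap = relax(J, cue) @ patterns[0] / N
print(f"overlap with stored pattern after retrieval: {overlap:.2f}")
```

The sketch shows why unlearning remains local and unsupervised: each 'dream' only subtracts an outer product of the state the network itself relaxed into, yet this is what suppresses the spurious, correlation-induced attractors that one-shot Hebbian learning cannot avoid.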