Abstract

Attractors of nonlinear neural systems are at the core of the memory self-refreshing mechanism of human memory models that assume memories are dynamically maintained in a distributed network [Ans, B., and Rousset, S. (1997), ‘Avoiding Catastrophic Forgetting by Coupling Two Reverberating Neural Networks’, Comptes Rendus de l'Académie des Sciences Paris, Life Sciences, 320, 989–997; Ans, B., and Rousset, S. (2000), ‘Neural Networks with a Self-Refreshing Memory: Knowledge Transfer in Sequential Learning Tasks Without Catastrophic Forgetting’, Connection Science, 12, 1–19; Ans, B., Rousset, S., French, R.M., and Musca, S.C. (2002), ‘Preventing Catastrophic Interference in Multiple-Sequence Learning Using Coupled Reverberating Elman Networks’, in Proceedings of the 24th Annual Meeting of the Cognitive Science Society, eds. W.D. Gray and C.D. Schunn, Mahwah, NJ: Lawrence Erlbaum Associates, pp. 71–76; Ans, B., Rousset, S., French, R.M., and Musca, S.C. (2004), ‘Self-Refreshing Memory in Artificial Neural Networks: Learning Temporal Sequences Without Catastrophic Forgetting’, Connection Science, 16, 71–99; Ans, B. (2004), ‘Sequential Learning in Distributed Neural Networks Without Catastrophic Forgetting: A Single and Realistic Self-Refreshing Memory Can Do It’, Neural Information Processing-Letters and Reviews, 4, 27–32]. Can humans learn never-seen items from attractor patterns generated by a highly distributed artificial neural network? First, an opposition method was implemented to ensure that the attractors do not simply reproduce the items used to train the network (the source items): attractors were selected to be more similar, both at the exemplar and the centroid level, to some control items than to the source items. Despite this severe selection, blank networks trained only on the selected attractors performed better at test on the never-seen source items than on the never-seen control items. Two behavioural experiments using the opposition method show that humans exhibit more familiarity with the never-seen source items than with the never-seen control items, just as the networks do. Thus, humans are sensitive to the particular type of information that allows distributed artificial neural networks to dynamically maintain their memory, and this information does not reduce to the exemplars used to train the network that produced the attractors.
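
The opposition method summarised above lends itself to a compact sketch. The Python/NumPy fragment below is a minimal illustration under stated assumptions, not the authors' implementation: the reverberation routine, the use of Euclidean distance as the (dis)similarity measure, and the names net_forward, reverberate, and opposition_select are all introduced here for illustration only.

```python
import numpy as np

def reverberate(net_forward, n_patterns, dim, n_cycles=50, rng=None):
    # Hypothetical attractor generator: inject a random seed pattern into a
    # trained auto-associative network and feed its output back as input
    # until the activity settles on an attractor of the nonlinear dynamics.
    rng = np.random.default_rng() if rng is None else rng
    attractors = []
    for _ in range(n_patterns):
        x = rng.random(dim)          # random seed pattern
        for _ in range(n_cycles):
            x = net_forward(x)       # output re-injected as input
        attractors.append(x)
    return np.asarray(attractors)

def opposition_select(attractors, sources, controls):
    # Keep only attractors that are MORE similar to the control items than
    # to the source items, both at the exemplar level (closest single item)
    # and at the centroid level (mean item of each set).
    src_centroid = sources.mean(axis=0)
    ctl_centroid = controls.mean(axis=0)
    kept = []
    for a in attractors:
        d_src = np.linalg.norm(sources - a, axis=1).min()    # exemplar level
        d_ctl = np.linalg.norm(controls - a, axis=1).min()
        d_src_c = np.linalg.norm(src_centroid - a)           # centroid level
        d_ctl_c = np.linalg.norm(ctl_centroid - a)
        if d_ctl < d_src and d_ctl_c < d_src_c:
            kept.append(a)
    return np.asarray(kept)
```

On this reading, a blank network trained only on the patterns returned by opposition_select, and then tested on the source and control items, would reproduce the network comparison described above; the two behavioural experiments pose the same test to human participants.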

