A Generalising Random Access Memory (G-RAM) neuron is distinguished from conventional neuron models by the fact that its tolerance to departures in similarity from its training pattern is variable. This paper details how this property affects the behaviour of a class of digital probabilistic neural networks that has been attracting attention in the neural networks literature for some years, variously called n-tuple systems, weightless systems or p-RAM systems. After reviewing the literature on such networks, a novel, simple combinatoric analysis of the most likely behaviour of recursive G-RAM networks is described. The best network performance, measured by a key parameter called the ‘radius of retrievability’ (first defined by Wong and Sherrington [J. Phys. A 22 (1989) 2233] as the error in the input that still allows the dynamic network to evolve to the correct attractor state), is obtained with a training set composed of random data patterns. Increasing the size of the training set reduces this radius of retrievability in a predictable manner. Changing the nature of the training set to non-random patterns also reduces the radius of retrievability, to an extent that we show can be estimated from a measure of the diversity of the elements of the training set (which we refer to as the ‘mean intra-set Hamming distance’ of the training set). As noted above, the defining feature of G-RAMs (indicated by the G) is a generalization parameter that determines how far a neuron input vector can stray from a training input while the neuron still responds in the trained way. It is shown that when this generalization parameter is reduced, the radius of retrievability is also reduced, but it then remains stable in the face of an increase in the size, or a change in the nature, of the training set. This is a novel prediction of the behaviour of such systems and of the robustness of that behaviour in the face of variations in the size and correlation properties of the training set.
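To make the two quantities named above concrete, the short Python sketch below (purely illustrative; Python is not part of the paper) shows a hypothetical G-RAM-style response rule in which a neuron returns its trained output whenever an input lies within Hamming distance g of a stored training input, together with the mean intra-set Hamming distance used here as a diversity measure of the training set. The function names gram_response and mean_intra_set_hamming are our own, and the rule is a simplification of the spreading behaviour of an actual G-RAM, given only under those assumptions.

    from itertools import combinations

    def hamming(u, v):
        # Hamming distance between two equal-length binary strings.
        return sum(a != b for a, b in zip(u, v))

    def gram_response(x, training_set, g):
        # Hypothetical G-RAM-style generalization: respond with the trained
        # output of the nearest training input if x lies within Hamming
        # distance g of it, otherwise give no generalised response (None).
        nearest = min(training_set, key=lambda t: hamming(x, t))
        return training_set[nearest] if hamming(x, nearest) <= g else None

    def mean_intra_set_hamming(patterns):
        # Mean pairwise Hamming distance over the training set, used as a
        # measure of its diversity (random length-n patterns give about n/2).
        pairs = list(combinations(patterns, 2))
        return sum(hamming(u, v) for u, v in pairs) / len(pairs)

    # Illustrative use with a tiny training set of 8-bit patterns.
    train = {"00000000": 0, "11110000": 1, "10101010": 0}
    print(gram_response("00000001", train, g=2))   # within radius -> 0
    print(mean_intra_set_hamming(list(train)))     # diversity of the set

On this simplified reading, shrinking g makes the response more selective, which is the sense in which the abstract says a smaller generalization parameter trades a reduced radius of retrievability for stability against changes in the training set.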