Abstract

The entire set of binary vectors to be stored by a single-layer perceptron can be divided into two groups: one for which the output neuron state consistently equals the state of a particular input neuron, and a second for which the output neuron state consistently negates the state of that same input neuron. The capacity of the single-layer perceptron depends on the ratio between the sizes of these two groups. This dependence is examined via statistical mechanical methods, yielding the probability of obtaining a linearly separable solution for a random selection of input-output relations at a given value of this ratio. This probability is useful for designing recurrent neural network training algorithms, which can use the results obtained here to select the most probable internal representations to be realized in such nets. Moreover, the distribution of the linearly separable binary functions yields a good estimate of the total number of linearly separable binary functions for a given number of input neurons, a task considered to be computationally hard. Additional motivations for carrying out the calculation are to understand the capacity of simple nets under certain types of input-output correlations and to lay the foundations for analysing constructive training algorithms such as the tiling and upstart algorithms. All results are consistent with existing theoretical results.
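As a concrete illustration (not the paper's statistical mechanical calculation), the quantity studied here can be estimated numerically: draw random ±1 patterns, let a chosen fraction of the outputs copy a designated input neuron and the rest negate it, and test each sample for linear separability. The minimal sketch below does this with an exact linear programming feasibility test; the function names, the parameter `rho` for the group ratio, and the sample sizes are all illustrative assumptions, not quantities taken from the paper.

```python
# Illustrative Monte Carlo sketch (assumed setup, not the paper's method):
# estimate the probability that a random pattern set is linearly separable
# when a fraction `rho` of outputs copies input neuron 0 and the rest negate it.
import numpy as np
from scipy.optimize import linprog

def is_linearly_separable(X, y):
    """Exact test: the set is separable iff some (w, b) satisfies
    y_i * (w . x_i + b) >= 1 for all i (margin 1 by rescaling)."""
    p, n = X.shape
    # Variables z = (w_1..w_n, b); constraints -y_i * (x_i, 1) . z <= -1.
    A_ub = -y[:, None] * np.hstack([X, np.ones((p, 1))])
    b_ub = -np.ones(p)
    res = linprog(np.zeros(n + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return res.status == 0  # status 0 = feasible, 2 = infeasible

def separability_probability(n, p, rho, trials=200, seed=0):
    """Monte Carlo estimate of P(separable) at group ratio rho (assumed name)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        X = rng.choice([-1, 1], size=(p, n))
        k = int(round(rho * p))
        # First k outputs equal input neuron 0, the remaining p-k negate it.
        y = np.concatenate([X[:k, 0], -X[k:, 0]])
        hits += is_linearly_separable(X, y)
    return hits / trials

if __name__ == "__main__":
    for rho in (0.5, 0.75, 1.0):
        print(rho, separability_probability(n=10, p=25, rho=rho))
```

Linear programming is used here because it certifies non-separability exactly; perceptron learning would also converge on separable samples but cannot rule out separability within any fixed number of epochs.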
