Abstract

What is the smallest multilayer perceptron able to compute arbitrary and random functions? Previous results show that a net with one hidden layer containing N − 1 threshold units is capable of implementing an arbitrary dichotomy of N points. A construction is presented here for implementing an arbitrary dichotomy with one hidden layer containing ⌈N/d⌉ units, for any set of N points in general position in d dimensions. This is in fact the smallest such net, as dichotomies are described which cannot be implemented by any net with fewer units. Several constructions are presented of one-hidden-layer nets implementing arbitrary functions of N points into the e-dimensional hypercube. One of these has only ⌈4N/d⌉ · ⌈e/⌈log₂(N/d)⌉⌉ units in its hidden layer. Arguments based on a function counting theorem of Cover establish that any net implementing arbitrary functions must have at least Ne/log₂(N) weights, so that no net with one hidden layer containing fewer than Ne/(d log₂(N)) units will suffice. Simple counts also show that if the weights are only allowed to assume one of n_g possible values, no net with fewer than Ne/log₂(n_g) weights will suffice. Thus the gain coming from using real-valued synapses appears to be only logarithmic. The circuit implementing functions into the e-dimensional hypercube realizes such logarithmic gains. Since the counting arguments bound from below only the number of weights, the possibility is suggested that, if suitable restrictions are imposed on the input vector set to avoid topological obstructions, two-hidden-layer nets with O(N) weights but only O(√N) threshold units might suffice for arbitrary dichotomies. Interesting and potentially sufficient restrictions include (a) the vectors are binary, i.e., lie on the d-dimensional hypercube, or (b) they are randomly and uniformly selected from a bounded region.
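
To make the size estimates above concrete, the following is a minimal sketch that evaluates the formulas quoted in the abstract for sample problem sizes. The choice of Python, the function names, and the example values of N, d, e, and n_g are illustrative assumptions, not part of the paper; the ceiling-bracket reading of the expressions is assumed.

```python
import math

def dichotomy_hidden_units(N: int, d: int) -> int:
    """Hidden threshold units sufficient (and, per the abstract, minimal) for an
    arbitrary dichotomy of N points in general position in d dimensions: ceil(N/d)."""
    return math.ceil(N / d)

def hypercube_hidden_units(N: int, d: int, e: int) -> int:
    """Hidden units in the quoted construction mapping N points into the
    e-dimensional hypercube: ceil(4N/d) * ceil(e / ceil(log2(N/d)))."""
    inner = math.ceil(math.log2(N / d))
    return math.ceil(4 * N / d) * math.ceil(e / inner)

def weight_lower_bound_real(N: int, e: int) -> float:
    """Counting lower bound (Cover-style) on the number of real-valued weights:
    N*e / log2(N)."""
    return N * e / math.log2(N)

def weight_lower_bound_discrete(N: int, e: int, n_g: int) -> float:
    """Counting lower bound when each weight takes one of n_g values:
    N*e / log2(n_g)."""
    return N * e / math.log2(n_g)

if __name__ == "__main__":
    # Illustrative sizes only (not taken from the paper).
    N, d, e, n_g = 1024, 16, 8, 256
    print(dichotomy_hidden_units(N, d))                   # 64
    print(hypercube_hidden_units(N, d, e))                # 512
    print(round(weight_lower_bound_real(N, e)))           # 819
    print(round(weight_lower_bound_discrete(N, e, n_g)))  # 1024
```

With these sample numbers the discrete-weight bound exceeds the real-weight bound only by a factor of log₂(N)/log₂(n_g), which is the "only logarithmic" gain from real-valued synapses mentioned in the abstract.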
