Abstract

We examine the function approximation properties of the random neural network model, or GNN. We consider a feedforward bipolar GNN (BGNN) model which has both positive (excitatory) and negative (inhibitory) neurons in the output layer, and prove that the BGNN is a universal function approximator. Specifically, for any $f \in C([0,1]^s)$ and any $\varepsilon > 0$, we show that there exists a feedforward BGNN which approximates $f$ uniformly with error less than $\varepsilon$. We also show that, after a clamping operation on its output, the feedforward GNN is a universal continuous function approximator.
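
Stated formally, the universal approximation claim for the BGNN can be written as follows; here $y_N$ denotes the input-to-output map computed by a BGNN $N$, a symbol we introduce purely for illustration:

$$
\forall f \in C([0,1]^s),\ \forall \varepsilon > 0,\ \exists\ \text{a feedforward BGNN } N \ \text{such that} \ \sup_{x \in [0,1]^s} \bigl| f(x) - y_N(x) \bigr| < \varepsilon .
$$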
