Abstract

This paper examines the function approximation properties of the "random neural network model," or GNN. The output of the GNN can be computed from the firing probabilities of selected neurons. We consider a feedforward Bipolar GNN (BGNN) model, which has both "positive and negative neurons" in the output layer, and prove that the BGNN is a universal function approximator. Specifically, for any f ∈ C([0, 1]^s) and any ε > 0, we show that there exists a feedforward BGNN that approximates f uniformly with error less than ε. We also show that, after an appropriate clamping operation on its output, the feedforward GNN is itself a universal function approximator.
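To illustrate the claim that the GNN's output is computed from neuron firing probabilities, the following is a minimal sketch of the standard steady-state equations of the random neural network, in which each neuron i fires with probability q_i = λ⁺_i / (r_i + λ⁻_i), where λ⁺_i and λ⁻_i are its total excitatory and inhibitory arrival rates. In a feedforward topology these probabilities can be evaluated layer by layer. The two-layer structure, weights, and rates below are illustrative assumptions, not values from the paper, and inputs are treated directly as firing probabilities in [0, 1].

```python
import numpy as np

def layer_firing_probs(q_prev, W_plus, W_minus, Lam_plus, Lam_minus, r):
    """Steady-state firing probabilities of one feedforward GNN layer.

    q_prev    : firing probabilities of the previous layer
    W_plus    : excitatory weights w+(j, i), shape (prev, cur)
    W_minus   : inhibitory weights w-(j, i), shape (prev, cur)
    Lam_plus  : external excitatory (positive-signal) arrival rates
    Lam_minus : external inhibitory (negative-signal) arrival rates
    r         : firing rates of the current layer's neurons
    """
    lam_plus = q_prev @ W_plus + Lam_plus      # total excitatory arrival rate
    lam_minus = q_prev @ W_minus + Lam_minus   # total inhibitory arrival rate
    return lam_plus / (r + lam_minus)          # q_i = lambda+_i / (r_i + lambda-_i)

# Hypothetical example: two inputs -> two hidden neurons -> one output neuron
x = np.array([0.3, 0.7])  # input firing probabilities in [0, 1]
W1p = np.array([[0.4, 0.2], [0.1, 0.5]])
W1m = np.array([[0.1, 0.0], [0.2, 0.1]])
q_hidden = layer_firing_probs(x, W1p, W1m, np.zeros(2), np.zeros(2), np.ones(2))
W2p = np.array([[0.6], [0.3]])
W2m = np.array([[0.1], [0.2]])
q_out = layer_firing_probs(q_hidden, W2p, W2m, np.zeros(1), np.zeros(1), np.ones(1))
print(q_out)  # network output read from the output neuron's firing probability
```

In the paper's setting, outputs of this form are combined (via the positive and negative output neurons of the BGNN, or via clamping for the plain GNN) to approximate the target function f.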
