Abstract

We provide a radically elementary proof of the universal approximation property of the one-hidden-layer perceptron, based on the Taylor expansion and the Vandermonde determinant. It works for both L^q and uniform approximation on compact sets. This approach naturally yields some bounds for the design of the hidden layer and convergence results (including some rates) for the derivatives. A partial answer to Hornik's conjecture on the universality of the bias is proposed. An extension to vector-valued functions is also carried out. © 1997 Elsevier Science Ltd.
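The universal approximation property stated above can be illustrated numerically. The sketch below is not the paper's construction; it is a hypothetical least-squares demonstration that a one-hidden-layer perceptron with sigmoid activations can closely approximate a smooth function on a compact set (the hidden-layer size and weight scale are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target function on the compact set [-1, 1].
f = np.sin
x = np.linspace(-1.0, 1.0, 200)

n_hidden = 30                               # illustrative hidden-layer size
w = rng.normal(scale=4.0, size=n_hidden)    # fixed random hidden weights
b = rng.normal(scale=4.0, size=n_hidden)    # fixed random hidden biases

# Matrix of hidden activations: H[i, j] = sigma(w_j * x_i + b_j).
H = sigmoid(np.outer(x, w) + b)

# Fit the output-layer weights by least squares.
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)

# Uniform (sup-norm) error of the network on the grid.
err = np.max(np.abs(H @ c - f(x)))
print(f"max |network - sin| on [-1, 1]: {err:.2e}")
```

With only 30 hidden units the fitted network matches the target to high accuracy on the grid, consistent with the uniform approximation result on compact sets.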
