Abstract

Neural networks with finitely many fixed weights have the universal approximation property under certain conditions on the compact subsets of d-dimensional Euclidean space on which the approximation process is considered. Such conditions were delineated in our paper [26]. For many compact sets, however, it is impossible to approximate multivariate functions with arbitrary precision, and the question of estimating or efficiently computing the approximation error arises. This paper provides an explicit formula for the approximation error of single-hidden-layer neural networks with two fixed weights.
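The setting described above can be illustrated numerically. The following is a minimal sketch, not the paper's method: it assumes a network of the hypothetical form c0 + c1·σ(a·x − t1) + c2·σ(b·x − t2), where the weight directions a and b are fixed and only the outer coefficients and thresholds are free, and it estimates (by crude random search) an upper bound on the sup-norm approximation error over a grid on the compact set [0,1]².

```python
import numpy as np

def sigma(t):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-t))

def net(x, a, b, c0, c1, c2, t1, t2):
    """Single-hidden-layer network with two FIXED weight directions a, b.
    Only c0, c1, c2 (outer coefficients) and t1, t2 (thresholds) are free."""
    return c0 + c1 * sigma(x @ a - t1) + c2 * sigma(x @ b - t2)

def sup_error(f, a, b, grid, n_trials=2000, seed=0):
    """Random search over the free parameters; returns an upper bound on
    the best achievable sup-norm error of the network on the grid."""
    rng = np.random.default_rng(seed)
    target = f(grid)
    best = np.inf
    for _ in range(n_trials):
        c0, c1, c2 = rng.uniform(-3, 3, size=3)
        t1, t2 = rng.uniform(-2, 2, size=2)
        err = np.max(np.abs(net(grid, a, b, c0, c1, c2, t1, t2) - target))
        best = min(best, err)
    return best

# Example target f(x, y) = x*y on [0,1]^2; the fixed directions below
# are arbitrary choices for illustration, not taken from the paper.
xs = np.linspace(0, 1, 21)
grid = np.array([[x, y] for x in xs for y in xs])
a = np.array([1.0, 0.0])  # fixed weight direction 1 (assumption)
b = np.array([0.0, 1.0])  # fixed weight direction 2 (assumption)
err = sup_error(lambda g: g[:, 0] * g[:, 1], a, b, grid)
print(f"empirical sup-norm error bound: {err:.4f}")
```

Because the directions are fixed, the error found here generally does not shrink to zero no matter how the free parameters are tuned, which is exactly why an explicit formula for the residual approximation error is of interest.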
