Abstract

It is shown that any continuous function on any compact subset of n-dimensional space can be computed by a simple feedforward neural network after adjusting a single real parameter. For a finite subset, even a single-unit network with continuous dependence on the parameter suffices. However, these dependencies on the parameter cannot be made continuously differentiable, so the typical gradient-descent methods of parameter adjustment cannot be used. It is shown that the plausible and practical assumption of continuously differentiable dependence of a neural network on its parameters introduces hidden limitations on the abilities of such a structure.
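
The flavor of such single-parameter constructions can be illustrated with a crude sketch (this is not the paper's construction, which achieves continuous dependence; here the digit-packing decoder is only piecewise constant). Finitely many target values are packed into the binary digits of one real parameter theta, and a fixed map recovers each value. Because the decoder uses floor(), its dependence on theta is non-differentiable, echoing why gradient descent cannot tune such a parameter. The precision M and the helper names are assumptions of this sketch.

```python
# Illustrative sketch only: store N values in one real parameter via digit packing.
# Not the paper's construction; decode() is piecewise constant (hence non-smooth) in theta.
import math

M = 16  # bits of precision per stored value (an assumption of this sketch)

def encode(values):
    """Pack values in [0, 1) into a single real parameter, M bits each."""
    theta = 0.0
    for k, y in enumerate(values):
        q = int(y * (1 << M))                      # quantize y to M bits
        theta += q / float(1 << (M * (k + 1)))     # place bits in theta's expansion
    return theta

def decode(theta, k):
    """Recover the k-th stored value from theta (a step function of theta)."""
    shifted = theta * (1 << (M * (k + 1)))
    q = math.floor(shifted) % (1 << M)             # extract the k-th block of M bits
    return q / float(1 << M)

targets = [0.25, 0.7, 0.03125]
theta = encode(targets)
print([round(decode(theta, k), 4) for k in range(len(targets))])
# Prints values close to the targets. Note d(decode)/d(theta) is zero almost
# everywhere and undefined at the jumps, so gradient-based tuning of theta fails.
```

Float64 precision limits how many values one such parameter can hold; the theoretical result works with exact reals, where a single parameter has unbounded information capacity.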
