Abstract

We describe certain results of Mhaskar concerning the approximation capabilities of neural networks with one hidden layer. In particular, these results give the construction of neural networks evaluating a squashing function or a radial basis function that achieve optimal approximation of functions in Sobolev classes. We also report on the application of some of these ideas to the construction of general-purpose networks for the prediction of time series, such as the Mackey-Glass series or the flour data, when the number of independent variables is known in advance.
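As a minimal sketch of the kind of network discussed above (not Mhaskar's actual construction), the following Python code builds a one-hidden-layer network of the form sum_k a_k * sigma(w_k . x + b_k) with a sigmoid squashing function, and applies it to one-step prediction of a Mackey-Glass-like series through a time-delay embedding whose dimension is fixed in advance. All parameter values (hidden width, embedding dimension, series length, weight scales) are illustrative assumptions, and the outer coefficients are simply fit by least squares over randomly drawn inner weights.

    # Sketch only: one-hidden-layer "squashing" network for time series prediction.
    import numpy as np

    rng = np.random.default_rng(0)

    # Euler discretization of the Mackey-Glass delay equation
    # (tau, step size, and coefficients are assumed illustrative values).
    def mackey_glass(n, tau=17, dt=1.0, beta=0.2, gamma=0.1, p=10):
        x = np.full(n + tau, 1.2)
        for t in range(tau, n + tau - 1):
            x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** p) - gamma * x[t])
        return x[tau:]

    series = mackey_glass(1200)

    # Time-delay embedding: the number of independent variables (d = 4 here)
    # is chosen in advance, as the abstract assumes.
    d = 4
    X = np.column_stack([series[i : len(series) - d + i] for i in range(d)])
    y = series[d:]

    def sigma(z):  # sigmoid squashing function
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer with random inner weights; only the outer coefficients
    # a_k are solved for, by linear least squares.
    n_hidden = 50
    W = rng.normal(size=(d, n_hidden))
    b = rng.normal(size=n_hidden)
    H = sigma(X @ W + b)                       # hidden-layer outputs
    a, *_ = np.linalg.lstsq(H, y, rcond=None)  # outer weights

    pred = H @ a
    print("RMS one-step prediction error:", np.sqrt(np.mean((pred - y) ** 2)))

The same template carries over to a radial basis function network by replacing the sigmoid of an affine form with a radial kernel of the distance to a centre; the least-squares fit of the outer coefficients is unchanged.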
