Abstract

It is demonstrated, through theory and examples, how a feedforward neural network can be constructed directly and noniteratively to approximate the solutions of arbitrary linear ordinary differential equations. The method, which uses the hard limit transfer function, is linear in storage and processing time, and the L₂ norm of the network approximation error decreases quadratically as the number of hidden layer neurons increases. The construction requires imposing certain constraints on the values of the input, bias, and output weights, and assigning a specific role to each of these parameters. All results presented use the hard limit transfer function; however, the noniterative approach should also be applicable to hyperbolic tangent, sigmoid, and radial basis transfer functions.
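The abstract does not reproduce the construction itself, but the weight-role assignment it describes can be illustrated with a minimal sketch: with hard-limit units, fixed input weights and grid-spaced biases place one step threshold per neuron, and the output weights carry function increments, yielding a staircase approximation built in a single pass with no training. The names below (`hardlim`, `build_step_network`) and the use of the known solution exp(-x) are illustrative assumptions; the paper's actual method presumably constructs the network from the ODE itself rather than from a precomputed solution.

```python
import numpy as np

def hardlim(x):
    """Hard limit (Heaviside step) transfer function: 1 for x >= 0, else 0."""
    return (x >= 0).astype(float)

def build_step_network(f, a, b, n):
    """Noniteratively build a single-hidden-layer hard-limit network whose
    output is an (n+1)-neuron staircase approximation of f on [a, b].

    Parameter roles assumed in this sketch:
      input weights  -- fixed to 1, so neuron i compares x to a threshold
      biases         -- set to -t_i, one threshold at each grid point
      output weights -- increments f(t_i) - f(t_{i-1}), so the staircase
                        matches f exactly at the grid points
    """
    t = np.linspace(a, b, n + 1)                    # thresholds (grid points)
    w_in = np.ones(n + 1)                           # input weights
    bias = -t                                       # neuron i fires for x >= t_i
    w_out = np.diff(np.concatenate(([0.0], f(t))))  # step heights

    def net(x):
        h = hardlim(np.outer(np.atleast_1d(x), w_in) + bias)  # hidden layer
        return h @ w_out                                      # output layer
    return net

# Example: approximate the solution exp(-x) of y' = -y, y(0) = 1, on [0, 2],
# and estimate the L2 approximation error on a fine grid.
exact = lambda x: np.exp(-x)
net = build_step_network(exact, 0.0, 2.0, 200)
xs = np.linspace(0.0, 2.0, 10_001)
dx = xs[1] - xs[0]
l2_error = np.sqrt(np.sum((exact(xs) - net(xs)) ** 2) * dx)
print(f"L2 error with 200 hidden neurons: {l2_error:.2e}")
```

Because every weight is set by a closed-form rule, building the network costs one pass over the grid points, consistent with the linear storage and processing time claimed in the abstract; doubling n in this sketch halves the step width and shrinks the measured L₂ error accordingly.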
