Abstract

Much effort has previously been devoted to investigating the decision-making and object-identification capabilities of feedforward neural networks. In the present work we examine the less frequently investigated ability of such networks to implement computationally useful operations in arithmetic and function evaluation. The approach taken is to employ standard training methods, such as backpropagation, to teach simple three-level networks to perform selected operations ranging from one-to-one mappings to many-to-many mappings. The examples considered cover a wide range: performing reciprocal arithmetic on real-valued inputs, implementing particle-identifier functions for the identification of nuclear isotopes in scattering experiments, and locating the coordinates of a charged particle moving on a surface. All mappings are required to interpolate and extrapolate from a small sample of taught exemplars to the general continuous domain of possible inputs. A unifying principle is proposed that regards all such function constructions as expansions in terms of basis functions, each of which is associated with a hidden node and whose parameters are adjusted by techniques such as gradient descent.
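To make the described approach concrete, the following is a minimal sketch (not the authors' code) of one of the simplest cases mentioned in the abstract: a three-level network (input, one hidden layer of sigmoidal nodes, output) trained by plain gradient-descent backpropagation on a small sample of exemplars of the reciprocal function f(x) = 1/x. The hidden-layer size, learning rate, training interval, and epoch count are illustrative assumptions; each hidden node contributes one sigmoidal basis function to the overall expansion, and the final loop probes both interpolation inside the training interval and extrapolation beyond it.

```python
import numpy as np

# Illustrative sketch: a three-level network trained to approximate
# f(x) = 1/x from a small sample of exemplars on [1, 10].
# All hyperparameters below are assumptions, not values from the paper.

rng = np.random.default_rng(0)

# Small sample of taught exemplars from the continuous input domain
x = rng.uniform(1.0, 10.0, size=(64, 1))
y = 1.0 / x

n_hidden = 20
W1 = rng.normal(0.0, 1.0, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, size=(n_hidden, 1))
b2 = np.zeros(1)
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    # Forward pass: each hidden node supplies one sigmoidal basis function,
    # and the output is a weighted expansion over those basis functions
    h = sigmoid(x @ W1 + b1)        # (64, n_hidden)
    y_hat = h @ W2 + b2             # (64, 1)

    # Backpropagation of the mean-squared error
    err = y_hat - y
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)

    # Gradient-descent parameter updates
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Probe interpolation (2.5, 7.0 lie inside [1, 10]) and
# extrapolation (12.0 lies outside the taught sample)
for test in (2.5, 7.0, 12.0):
    h = sigmoid(np.array([[test]]) @ W1 + b1)
    print(test, float(h @ W2 + b2), 1.0 / test)
```

The same skeleton extends to the many-to-many cases the abstract lists (particle identification, coordinate location) by widening the input and output layers; only the exemplar set and layer dimensions change.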
