Abstract

Multilayer neural networks are widely applied in pattern recognition, speech processing, optimization, non-linear identification, non-linear adaptive control, and other fields. They are usually trained by the error back-propagation algorithm, whose main computational burden is the search for the goal-function gradient, carried out successively backward from the output layer. Two-layer neural networks can solve the approximation problem for a complicated non-linear function of many variables and can be applied effectively to automatic control problems, in particular the identification of non-linear dynamic objects. For two-layer networks, the goal-function gradient can be calculated directly, omitting the error back-propagation procedure, although a large number of calculations per step remains. A training procedure for two-layer neural networks, simplified from the computational point of view and aimed at hardware implementation, is suggested below.

© (1994) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
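As an illustration of the direct-gradient idea mentioned above (a minimal sketch, not the paper's exact procedure; the network shape, squared-error goal function, and tanh hidden activation are assumptions), the gradient of a two-layer network's goal function can be written in closed form for both weight layers from the forward-pass quantities alone, with no separate layer-by-layer backward pass:

```python
# Hypothetical sketch: two-layer network y = W2 * tanh(W1 * x), with the
# goal function E = 0.5 * ||y - t||^2 for a target vector t. Both weight
# gradients are expressed directly in terms of forward-pass quantities.
import numpy as np

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)   # hidden-layer activations
    y = W2 @ h            # linear output layer
    return h, y

def direct_gradients(W1, W2, x, t):
    """Closed-form gradients of E with respect to W1 and W2."""
    h, y = forward(W1, W2, x)
    e = y - t                                        # output error
    gW2 = np.outer(e, h)                             # dE/dW2
    # dE/dW1: the same error, scaled by tanh' = 1 - h^2, outer with input
    gW1 = np.outer((W2.T @ e) * (1.0 - h**2), x)
    return gW1, gW2
```

The same quantities would be produced by back-propagation; the point is merely that for two layers they can be evaluated in one explicit step, which is what makes a simplified, hardware-oriented training procedure plausible.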