Abstract

In this paper, we propose a neural network model with a faster learning speed and good function-approximation capability for solving worst-case identification of nonlinear systems H∞ problems. Specifically, via the approximate transformable technique, we develop a Chebyshev-polynomials-based unified model neural network for the worst-case identification of nonlinear systems H∞ problems. Based on this approximate transformable technique, the relationship between the single-layered neural network and the multi-layered perceptron is derived. It is shown that the Chebyshev-polynomials-based unified model neural network can be represented as a functional-link network built on Chebyshev polynomials. We also derive a new learning algorithm that keeps the infinity norm of the transfer function from the input to the output below a prescribed level. It turns out that the Chebyshev-polynomials-based unified model neural network not only retains the capability of a universal approximator, but also learns faster than the multi-layered perceptron or the recurrent neural network in deterministic worst-case identification of nonlinear systems H∞ problems.
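The abstract does not give implementation details, but the core structural idea, a functional-link network whose input is expanded with Chebyshev polynomials and fed to a single trainable linear layer, can be sketched as follows. This is a minimal illustration using plain gradient descent on a squared-error loss as a stand-in for the paper's H∞ learning algorithm; the target function, polynomial order, and learning rate are all hypothetical choices, not taken from the paper.

```python
import numpy as np

def chebyshev_features(x, order):
    """Expand scalar inputs with Chebyshev polynomials T_0..T_order,
    using the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[: order + 1], axis=-1)

# Functional-link network: a single linear layer on Chebyshev features.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = np.sin(np.pi * x)                 # hypothetical nonlinear plant output

Phi = chebyshev_features(x, order=7)  # fixed nonlinear expansion
w = np.zeros(Phi.shape[1])            # only these weights are learned
lr = 0.3
for _ in range(5000):                 # plain gradient-descent updates
    err = Phi @ w - y
    w -= lr * Phi.T @ err / len(x)

print(np.max(np.abs(Phi @ w - y)))    # worst-case error on the samples
```

Because the nonlinearity lives entirely in the fixed Chebyshev expansion, only a single linear layer is trained, which is the source of the faster learning speed claimed relative to a multi-layered perceptron.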
