Abstract

In this book we discuss the use of artificial neural networks for the modelling and control of nonlinear systems in a system-theoretical context. After a short introduction to neural information processing systems in Chapter 1, we review basic neural network architectures and their learning rules in Chapter 2, for feedforward as well as recurrent networks. In Chapter 3 we treat the problem of nonlinear system identification using neural networks. Existing models such as NARX and NARMAX are discussed and neural state space models are introduced. Off-line and on-line learning algorithms are presented. An interpretation of neural network models as uncertain linear systems, representable as linear fractional transformations, is given. Examples of nonlinear system identification, including a simulated nonlinear system with hysteresis, a glass furnace with real measurement data, and chaotic systems, show the effectiveness of neural state space models. In Chapter 4 a short overview of neural control strategies is given, and neural optimal control is discussed in more detail, covering both the stabilization problem and the tracking problem. Furthermore, it is shown how results from linear control theory can be used as constraints on the neural control design in order to achieve local stability at a target point. The latter method has been successfully applied to the problems of swinging up an inverted pendulum and a double inverted pendulum. In Chapter 5 we introduce a modelling and control framework based on neural state space models, together with stability criteria. Closed-loop systems are transformed into NLq system form, and sufficient conditions for global asymptotic stability and I/O stability with finite L2-gain are derived. Links with H∞ control and μ theory are revealed, and the criteria are formulated as linear matrix inequalities. NLq theory is applied to the control of several types of nonlinear behaviour, including chaos, and to the real-life example of controlling nonlinear distortion in electrodynamic loudspeakers. Furthermore, several types of recurrent neural networks are represented as NLq systems, such as generalized cellular neural networks, multilayer Hopfield networks, and locally recurrent globally feedforward networks.
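To make the neural state space modelling idea of Chapter 3 concrete, the following is a minimal sketch of a discrete-time state space model whose state update and output map are one-hidden-layer feedforward networks with tanh activations. The layer sizes, random weights, and variable names below are illustrative assumptions for the sketch, not the book's exact parametrization or training procedure.

```python
# Minimal sketch (assumed parametrization, not the book's exact one):
#     x[k+1] = W_AB * tanh(V_A x[k] + V_B u[k] + b_AB)
#     y[k]   = W_CD * tanh(V_C x[k] + V_D u[k] + b_CD)
import numpy as np

rng = np.random.default_rng(0)

n_x, n_u, n_y, n_h = 2, 1, 1, 6   # state, input, output, hidden sizes (assumed)

# Randomly initialized weights; in practice they would be fitted to
# input/output data with an off-line or on-line learning algorithm.
V_A = rng.normal(scale=0.3, size=(n_h, n_x))
V_B = rng.normal(scale=0.3, size=(n_h, n_u))
b_AB = np.zeros(n_h)
W_AB = rng.normal(scale=0.3, size=(n_x, n_h))

V_C = rng.normal(scale=0.3, size=(n_h, n_x))
V_D = rng.normal(scale=0.3, size=(n_h, n_u))
b_CD = np.zeros(n_h)
W_CD = rng.normal(scale=0.3, size=(n_y, n_h))

def step(x, u):
    """One step of the neural state space model: returns (x_next, y)."""
    x_next = W_AB @ np.tanh(V_A @ x + V_B @ u + b_AB)
    y = W_CD @ np.tanh(V_C @ x + V_D @ u + b_CD)
    return x_next, y

# Simulate the model for a short input sequence.
x = np.zeros(n_x)
for k in range(5):
    u = np.array([np.sin(0.2 * k)])
    x, y = step(x, u)
    print(f"k={k}  y={y[0]: .4f}")
```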
