Abstract
The application of neural networks to the synthesis of control systems is considered. Examples are given of synthesizing control systems by reinforcement-learning methods in which the plant's state vector is available. For plants whose state vector is inaccessible, two variants of neural-controller synthesis are discussed: 1) a neural network with recurrent feedback connections; 2) a neural network fed with an error vector, in which every error except the first passes through a delay element before reaching the network input. A drawback of the first variant is that existing reinforcement-learning methods cannot be applied to such a network structure, so training requires a data set obtained, for example, from a previously designed linear controller. The network structure of the second variant does admit reinforcement-learning methods, but the article states and proves that a neural network without recurrent connections cannot be used to synthesize a control system for plants with three or more integrators. The application of both structures is illustrated by synthesizing control systems for the plants 1/s² and 1/s³ represented in discrete form.
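The second structure described in the abstract can be sketched as follows: a discrete-time model of the double-integrator plant 1/s² (standard zero-order-hold discretization, not taken from the paper) is driven by a feedforward network that sees only the current tracking error and one delayed copy of it, never the state vector. The network weights below are untrained, illustrative placeholders standing in for a reinforcement-learned policy; the sample time and layer sizes are assumptions made for the sketch.

```python
import numpy as np

# Zero-order-hold discretization of the double integrator 1/s^2
# with an assumed sample time T (matrices are the textbook result).
T = 0.01
A = np.array([[1.0, T],
              [0.0, 1.0]])
B = np.array([[T**2 / 2.0],
              [T]])
C = np.array([[1.0, 0.0]])  # the output is the first state

# Illustrative feedforward controller: inputs are e(k) and e(k-1),
# i.e. the error vector where the second error passes through a
# delay element before reaching the network input.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 2))  # hidden layer (size assumed)
b1 = np.zeros((4, 1))
W2 = rng.normal(scale=0.5, size=(1, 4))
b2 = np.zeros((1, 1))

def controller(e_now, e_prev):
    """Feedforward net on [e(k), e(k-1)]; weights are untrained
    placeholders, not the trained controller from the paper."""
    z = np.array([[e_now], [e_prev]])
    h = np.tanh(W1 @ z + b1)
    return float(W2 @ h + b2)

# A few closed-loop steps: reference r, plant output y = C x
x = np.zeros((2, 1))
r = 1.0
e_prev = 0.0
for k in range(3):
    e_now = r - float(C @ x)
    u = controller(e_now, e_prev)
    x = A @ x + B * u
    e_prev = e_now  # delay element: store the error for the next step
```

Because the controller is a static map of a finite error window, it carries no internal state of its own, which is exactly the limitation the paper's proof addresses for plants with three or more integrators.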
Published in: Transaction of Scientific Papers of the Novosibirsk State Technical University