Abstract

In this paper, neural networks are used to approximately solve the finite-horizon optimal H∞ state-feedback control problem. The method is based on solving the Hamilton-Jacobi-Isaacs equation of the corresponding finite-horizon zero-sum game. A neural network approximates the game value function on a prescribed domain of the state space, and the resulting control is computed as the output of a neural network. It is shown that the neural-network approximation converges uniformly to the game value function and that the resulting controller provides closed-loop stability and a bounded L₂ gain. The result is a nearly exact H∞ feedback controller with time-varying coefficients that is computed a priori, offline. The results of this paper are applied to the Rotational/Translational Actuator benchmark nonlinear control problem.
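To illustrate the general idea of value-function approximation behind such methods (this is a minimal sketch, not the paper's implementation), the example below fits a polynomial-basis network V̂(x) = wᵀφ(x) to a known quadratic value function V(x) = xᵀPx on sampled states, then recovers a state feedback from the gradient of the approximation. The matrices P, B, R and the basis φ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative example values (assumptions, not from the paper):
rng = np.random.default_rng(0)
P = np.array([[2.0, 0.5], [0.5, 1.0]])  # "true" game value matrix
B = np.array([[0.0], [1.0]])            # input matrix
R = np.array([[1.0]])                   # control weighting

def phi(x):
    # quadratic polynomial basis: x1^2, x1*x2, x2^2
    return np.array([x[0]**2, x[0]*x[1], x[1]**2])

def dphi(x):
    # Jacobian of the basis with respect to x
    return np.array([[2*x[0], 0.0],
                     [x[1],   x[0]],
                     [0.0,    2*x[1]]])

# Sample training states on the domain and fit the weights
# by least squares against the known value function.
X = rng.uniform(-1, 1, size=(200, 2))
A = np.stack([phi(x) for x in X])
y = np.array([x @ P @ x for x in X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def V_hat(x):
    # approximate value function
    return w @ phi(x)

def u_hat(x):
    # state feedback u = -R^{-1} B' dV/dx from the fitted value gradient
    grad = w @ dphi(x)
    return -np.linalg.solve(R, B.T @ grad)

x0 = np.array([0.5, -0.3])
print(V_hat(x0))       # approximate value at x0
print(x0 @ P @ x0)     # exact value at x0, for comparison
```

Because the quadratic basis spans V exactly here, the fit is essentially exact; in the paper's setting the value function is not quadratic and the network instead approximates it uniformly on a compact domain.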
