This paper presents a fixed-time optimal control design approach using reinforcement learning (RL) that guarantees not only fixed-time convergence of the learning algorithm to an optimal controller but also fixed-time stability of the learned control solution. To ensure the former, the zero-finding capabilities of zeroing neural networks (ZNNs) are leveraged, and novel adaptive laws are presented accordingly. To ensure the latter, conditions on the cost function are provided under which its corresponding optimal controller guarantees fixed-time stability of the closed-loop system. It is also shown that imposing a fixed-time stability constraint on the infinite-horizon optimal control solution in fact solves the classical fixed-final-time (FFTM) finite-horizon optimal control problem. The Hamilton–Jacobi–Bellman (HJB) equation for the FFTM optimal control problem is time-varying, which makes it difficult, if not impossible, to learn directly online using RL. The presented approach bypasses this difficulty by developing an online solution for infinite-horizon optimal control problems under fixed-time stability constraints and with fixed-time convergent tuning laws. This approach makes both the learning time and the closed-loop system settling time predictable, tunable, and bounded. Simulation results for fixed-time optimal adaptive stabilization of a torsional pendulum system illustrate this new design approach for nonlinear optimal control theory.
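To make the fixed-time convergence idea concrete, the following is a minimal sketch (not the paper's actual adaptive laws) of the generic ZNN-style error dynamic commonly used for fixed-time zero-finding: the error `e` is driven by `e_dot = -g1*|e|^a*sign(e) - g2*|e|^b*sign(e)` with `0 < a < 1 < b`, whose settling time is bounded by `1/(g1*(1-a)) + 1/(g2*(b-1))` independently of the initial error. All gains and tolerances here are illustrative assumptions.

```python
import numpy as np

def fixed_time_error_dynamic(e0, g1=2.0, g2=2.0, a=0.5, b=1.5,
                             dt=1e-4, t_end=5.0, tol=1e-8):
    """Forward-Euler integration of the fixed-time zeroing dynamic
       e_dot = -g1*|e|^a*sign(e) - g2*|e|^b*sign(e),  0 < a < 1 < b.
    Returns the final error and the time at which |e| fell below tol.
    The settling time is bounded by T <= 1/(g1*(1-a)) + 1/(g2*(b-1))
    for ANY initial error e0 (the hallmark of fixed-time stability)."""
    e, t = float(e0), 0.0
    while t < t_end and abs(e) > tol:
        e_dot = -g1 * abs(e)**a * np.sign(e) - g2 * abs(e)**b * np.sign(e)
        e += e_dot * dt
        t += dt
    return e, t

# Settling-time bound for the gains above: 1/(2*0.5) + 1/(2*0.5) = 2.0 s,
# regardless of whether e0 is 0.1 or 100.
T_bound = 1 / (2.0 * (1 - 0.5)) + 1 / (2.0 * (1.5 - 1))
```

Note the contrast with a purely linear dynamic `e_dot = -g*e`, which converges only asymptotically: there, larger initial errors take longer to settle, whereas here the bound `T_bound` holds uniformly, which is what makes the learning time predictable and tunable.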