Abstract

In this paper, a novel integral reinforcement learning approach based on value iteration (VI) is developed for designing the H∞ controller of continuous-time (CT) nonlinear systems. First, the VI learning mechanism is introduced to solve the zero-sum game problem, which is equivalent to solving the Hamilton–Jacobi–Isaacs (HJI) equation arising in the H∞ control problem. Since the proposed method is based on the VI learning mechanism, it does not require an initial admissible control for its implementation and thus admits more general initial conditions than methods based on policy iteration (PI). The iterative property of the value function is analysed starting from an arbitrary positive initial function, and the H∞ controller is obtained once the iteration converges. For the implementation of the proposed method, three neural networks are introduced to approximate the iterative value function, the iterative control policy and the iterative disturbance policy, respectively. To verify the effectiveness of the VI-based method, a linear case and a nonlinear case are presented.
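
As a rough illustration of the iteration summarised above, the following LaTeX sketch writes the zero-sum game formulation and a VI-style integral update in the standard notation for input-affine CT systems. The system functions f, g, k, the weights Q, R, the attenuation level γ and the reinforcement interval T are notational assumptions for illustration only; the abstract does not give the paper's exact equations.

% Minimal sketch of the zero-sum game / HJI formulation for H-infinity control
% of an input-affine CT system, in standard (assumed) notation; the paper's
% exact symbols and update may differ.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
System dynamics and value function of the zero-sum game:
\begin{align}
\dot{x} &= f(x) + g(x)\,u + k(x)\,w, \\
V(x(t)) &= \int_{t}^{\infty} \bigl( Q(x) + u^{\top} R\, u - \gamma^{2} w^{\top} w \bigr)\,\mathrm{d}\tau .
\end{align}
HJI equation with the associated saddle-point policies:
\begin{align}
0 &= Q(x) + \nabla V^{\top}\!\bigl( f + g\,u^{*} + k\,w^{*} \bigr)
   + {u^{*}}^{\top} R\, u^{*} - \gamma^{2} {w^{*}}^{\top} w^{*}, \\
u^{*} &= -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V, \qquad
w^{*} = \tfrac{1}{2\gamma^{2}} k(x)^{\top} \nabla V .
\end{align}
A VI-style integral update, starting from an arbitrary positive $V_{0}$ and
requiring no admissible initial policy (unlike PI):
\begin{align}
V_{i+1}(x(t)) &= \int_{t}^{t+T} \bigl( Q(x) + u_{i}^{\top} R\, u_{i}
   - \gamma^{2} w_{i}^{\top} w_{i} \bigr)\,\mathrm{d}\tau + V_{i}(x(t+T)), \\
u_{i} &= -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V_{i}, \qquad
w_{i} = \tfrac{1}{2\gamma^{2}} k(x)^{\top} \nabla V_{i} .
\end{align}
\end{document}

In an implementation of this kind, the three quantities V_i, u_i and w_i are the natural candidates for the three neural network approximators mentioned in the abstract.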
