Abstract

In this paper, a data-based neural policy learning method is developed to solve the robust tracking control problem for a class of continuous-time systems subject to two kinds of uncertainties simultaneously. First, robust trajectory tracking is achieved by driving the tracking error to zero. Specifically, an augmented system containing the tracking error is constructed, and the robust tracking control problem is transformed into an optimal control problem by selecting a suitable cost function. Then, a neural network identifier is built to reconstruct the unknown dynamics, and a policy iteration algorithm employing a critic neural network is adopted to solve the Hamilton–Jacobi–Bellman equation. Through this learning algorithm, an approximate optimal control policy is obtained, from which the solution of the robust tracking control problem is derived. Finally, two simulation examples are presented to verify the effectiveness of the developed method.
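
For context, the following is a minimal sketch of the standard formulation underlying this type of approach; the particular form of the dynamics, the uncertainty term, and the symbols used below are illustrative assumptions for exposition, not the paper's exact equations.

\[
\dot{x}(t) = f(x) + g(x)u + \Delta(x), \qquad
\dot{x}_d(t) = \varphi(x_d), \qquad
e = x - x_d ,
\]
where \(\Delta(x)\) collects the uncertain terms and \(x_d\) is the desired trajectory. Stacking the error and the reference, \(z = [\,e^\top,\ x_d^\top]^\top\), yields an augmented system \(\dot{z} = F(z) + G(z)u\), and the robust tracking problem is recast as minimizing a cost of the form
\[
J(z_0) = \int_0^{\infty} \big( \Gamma(z) + e^\top Q e + u^\top R u \big)\, \mathrm{d}t ,
\]
with \(\Gamma(z) \ge 0\) chosen to dominate the effect of the uncertainties. The optimal value function \(V^*\) then satisfies the Hamilton–Jacobi–Bellman equation
\[
0 = \min_{u} \Big[ \Gamma(z) + e^\top Q e + u^\top R u + \big(\nabla V^*(z)\big)^{\top} \big( F(z) + G(z)u \big) \Big],
\qquad
u^*(z) = -\tfrac{1}{2} R^{-1} G(z)^\top \nabla V^*(z) .
\]
Under these assumptions, policy iteration alternates policy evaluation (fitting \(V\) for the current policy with a critic network, using data regenerated through the identifier) and policy improvement (updating \(u\) from the gradient of the fitted \(V\)) until the critic weights converge to an approximate optimal policy.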
