Abstract

In this paper, we propose a computational approach for solving a model-based optimal control problem, with the aim of obtaining the optimal solution of the original nonlinear optimal control problem. Since the structures of the two problems are different, solving the model-based optimal control problem alone will not give the optimal solution of the nonlinear optimal control problem. In our approach, adjusted parameters are added to the model used so that the differences between the real plant and the model can be measured. On this basis, an expanded optimal control problem is introduced, in which system optimization and parameter estimation are integrated interactively. A Hamiltonian function, which adjoins the cost function, the state equation and the additional constraints, is defined. By applying the calculus of variations, a set of necessary optimality conditions is derived, defining a modified model-based optimal control problem, a parameter estimation problem and the computation of modifiers. To obtain the optimal solution, the modified model-based optimal control problem is converted into a nonlinear programming problem through the canonical formulation, from which the gradient formulation can be made. During the iterative procedure, the control sequences are generated as the admissible control law of the model used, together with the corresponding state sequences, and the solution is updated repeatedly through the adjusted parameters. At convergence, the solution approaches the correct optimal solution of the original optimal control problem in spite of model-reality differences. For illustration, two examples are studied, and the results show the efficiency of the proposed approach.
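
To make the problem structure described above concrete, the following is a minimal sketch in generic discrete-time notation; the symbols f, A, B, alpha and gamma are illustrative assumptions rather than the paper's own notation. Problem (P) stands for the real nonlinear optimal control problem, and Problem (M) for a simplified model-based problem in which adjustable parameters absorb the model-reality differences.

    % Sketch only: assumed discrete-time horizon k = 0, ..., N-1.
    % Real (nonlinear) optimal control problem, Problem (P):
    \min_{u}\; J(u) = \varphi\bigl(x(N)\bigr) + \sum_{k=0}^{N-1} L\bigl(x(k),u(k)\bigr)
    \quad\text{s.t.}\quad x(k+1) = f\bigl(x(k),u(k)\bigr),\qquad x(0) = x_0 .

    % Simplified model-based problem, Problem (M), with adjustable parameters
    % \alpha(k) (state equation) and \gamma(k) (cost) measuring model-reality differences:
    \min_{u}\; J_M(u) = \varphi_M\bigl(x(N)\bigr) + \sum_{k=0}^{N-1} \bigl[ L_M\bigl(x(k),u(k)\bigr) + \gamma(k) \bigr]
    \quad\text{s.t.}\quad x(k+1) = A\,x(k) + B\,u(k) + \alpha(k),\qquad x(0) = x_0 .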

Highlights

  • The linear quadratic regulator (LQR) problem is a standard optimal control problem in which the cost functional is a quadratic criterion and the state dynamics are linear

  • At the end of the iterations, the iterative solution converges to the correct optimal solution of the original optimal control problem, in spite of model-reality differences [11], [12], [1]

  • Because of the complexity of the original optimal control problem, a simplified model-based optimal control problem is proposed and solved iteratively so that the true optimal solution of the original optimal control problem can be obtained


Summary

Introduction

The linear quadratic regulator (LQR) problem is a standard optimal control problem in which the cost functional is a quadratic criterion and the state dynamics are linear. The nonlinear state dynamics are typically linearized before a decision control policy is determined to minimize the cost function. From this point of view, adjustable parameters are introduced into the LQR model so that the differences between the real plant and the model used can be measured repeatedly. The value of the control sequences is then updated through the gradient algorithm, where standard mathematical optimization techniques are applicable. Notice that solving Problem (M) iteratively gives the true optimal solution of Problem (P); this is possible because the adjustable parameters introduced into the model measure the differences between the real plant and the model used at each iteration. The computation of the gradient of the cost functional J3(u) is stated in the following algorithm.
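
For reference, a standard way to obtain the gradient of such a cost functional with respect to the control sequence is through a Hamiltonian and a backward costate (adjoint) recursion. The sketch below uses the generic notation of the earlier sketch and is an assumption about the form of the computation, not a transcription of the paper's equations.

    % Hamiltonian of the model-based problem (generic sketch):
    H(k) = L_M\bigl(x(k),u(k)\bigr) + \gamma(k) + p(k+1)^{\top}\bigl[A\,x(k) + B\,u(k) + \alpha(k)\bigr]

    % Costate recursion, run backwards from p(N) = \partial\varphi_M/\partial x(N):
    p(k) = \partial H(k)/\partial x(k) = \partial L_M/\partial x(k) + A^{\top}\,p(k+1)

    % Gradient of the cost functional with respect to the control at stage k:
    \nabla_{u(k)} J_M = \partial H(k)/\partial u(k) = \partial L_M/\partial u(k) + B^{\top}\,p(k+1)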

Gradient algorithm
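
The paper's specific algorithm is not reproduced here. As a hedged illustration only, the Python sketch below performs a generic gradient-descent update of a control sequence for a discrete-time linear model with quadratic cost, using the backward costate recursion sketched above; all names (A, B, Q, R, alpha, step) are assumptions introduced for this example.

    import numpy as np

    def simulate(A, B, alpha, x0, u):
        """Forward simulation of the model x(k+1) = A x(k) + B u(k) + alpha(k)."""
        x = [x0]
        for k in range(len(u)):
            x.append(A @ x[k] + B @ u[k] + alpha[k])
        return x

    def cost_gradient(A, B, Q, R, x, u):
        """Gradient of J = 0.5*sum_k (x'Qx + u'Ru) + 0.5*x(N)'Q x(N) with respect
        to each u(k), via the costate recursion p(k) = Q x(k) + A' p(k+1)."""
        N = len(u)
        p = Q @ x[N]                      # terminal costate p(N)
        g = [None] * N
        for k in reversed(range(N)):
            g[k] = R @ u[k] + B.T @ p     # dH/du(k), with p holding p(k+1)
            p = Q @ x[k] + A.T @ p        # step back to p(k)
        return g

    def gradient_step(A, B, Q, R, alpha, x0, u, step=0.1):
        """One gradient-descent update of the whole control sequence."""
        x = simulate(A, B, alpha, x0, u)
        g = cost_gradient(A, B, Q, R, x, u)
        return [u[k] - step * g[k] for k in range(len(u))]

In an iterative scheme of the kind described in the abstract, each such gradient step would be interleaved with re-estimating the adjustable parameters alpha(k) from the mismatch between the real plant and the model at the current iterate.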
Findings
The iterative computation procedure
