Abstract

In this paper, we consider the optimal control of an unknown stochastic dynamical system in both the finite‐horizon and infinite‐horizon settings. The objective is to find an optimal controller that minimizes the expected value of a cost function depending on the random disturbance. Throughout the paper, the mean vector and covariance matrix of the disturbance distribution are assumed to be unknown, and an uncertainty set in the space of mean vectors and covariance matrices is introduced. For the finite‐horizon case, we derive closed‐form expressions for the unique optimal policy and for the opponent's policy that generates the worst‐case distribution. For the infinite‐horizon case, we reduce the Riccati equation obtained in the finite‐horizon setting to an algebraic Riccati equation, which guarantees the existence of a solution. It is shown that the optimal policies obtained in both cases stabilize the expected value of the system state under the worst‐case distribution. Furthermore, the unknown system matrices can be explicitly computed using the adaptive dynamic programming technique, which in turn allows the optimal control policy to be obtained by solving the algebraic Riccati equation. Finally, a simulation example is presented to demonstrate the effectiveness of our theoretical results.
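For context, the infinite‐horizon step described above rests on solving an algebraic Riccati equation. The following is a minimal sketch of the standard (non‐robust) discrete‐time LQ case using SciPy, with hypothetical system matrices chosen purely for illustration; the paper's formulation additionally optimizes over the worst‐case disturbance distribution and estimates the system matrices, which this sketch does not attempt.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical system x_{k+1} = A x_k + B u_k + w_k (matrices are
# illustrative placeholders, not taken from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state cost weight
R = np.array([[1.0]])   # input cost weight

# Solve the discrete algebraic Riccati equation for P.
P = solve_discrete_are(A, B, Q, R)

# Stabilizing state-feedback gain: u_k = -K x_k.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# A stabilizing policy drives the closed-loop spectral radius below 1,
# mirroring the stability claim for the expected state in the abstract.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

Checking that the spectral radius `rho` is below one confirms the computed gain stabilizes this illustrative system in expectation.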
