Abstract

Generalized partitioned solutions (GPS) of Riccati equations (RE) are presented in terms of forward and backward time differential equations that are theoretically interesting, possibly computationally advantageous, and that provide interesting interpretations, e.g., in terms of generalized partial observability and controllability matrices. The GPS are the natural framework for the effective change of initial conditions and for the transformation of backward RE to forward RE and vice versa. The GPS are given in terms of families of forward or backward RE, and constitute generalizations to time-varying RE of well-known solution algorithms such as the X-Y or Chandrasekhar algorithms. Most importantly, based on the GPS, computationally effective algorithms are obtained for the numerical solution of RE. These partitioned numerical algorithms (PNA) have a decomposed or "partitioned" structure, namely, they are given exactly in terms of a set of elemental solutions which are completely decoupled and, as such, computable in either a parallel or a serial processing mode. Further, the overall solution is given exactly in terms of a simple recursive operation on the elemental solutions. Finally, the PNA for a large class of RE, namely those with periodic or constant matrices, are completely integration-free, other than for a subinterval of the total computation interval, whose length, moreover, can be chosen arbitrarily. Also based on the GPS, a computationally attractive numerical algorithm is obtained for the computation of the steady-state solution of time-invariant RE. This algorithm results from doubling the length of the partitioning interval and from straightforward use of the GPS. The resulting "doubling" PNA is fast and essentially integration-free: it requires integration only in an initial subinterval, whose length is arbitrary, and subsequently consists of simple iterative operations at the end of each time interval, each twice as long as the interval of the previous iteration, i.e., doubling.
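The doubling principle can be illustrated, independently of the GPS derivation, by the well-known structure-preserving doubling recursion for the discrete-time algebraic Riccati equation, in which each iteration doubles the effective horizon. The sketch below is not the paper's GPS-based doubling PNA; the discrete-time formulation, the function name `doubling_dare`, and the test matrices are assumptions used purely to illustrate the doubling idea.

```python
import numpy as np

def doubling_dare(A, B, Q, R, iters=30, tol=1e-12):
    """Sketch of a doubling iteration for the discrete-time algebraic
    Riccati equation  P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q,
    using the structure-preserving doubling recursion.  Each pass
    corresponds to doubling the horizon, so convergence is rapid."""
    n = A.shape[0]
    I = np.eye(n)
    Ak = A.copy()
    Gk = B @ np.linalg.solve(R, B.T)   # G_0 = B R^{-1} B'
    Hk = Q.copy()                      # H_k -> steady-state P
    for _ in range(iters):
        M = np.linalg.inv(I + Gk @ Hk)
        A_next = Ak @ M @ Ak
        G_next = Gk + Ak @ M @ Gk @ Ak.T
        H_next = Hk + Ak.T @ Hk @ M @ Ak
        if np.linalg.norm(H_next - Hk, 'fro') <= tol * np.linalg.norm(H_next, 'fro'):
            return H_next
        Ak, Gk, Hk = A_next, G_next, H_next
    return Hk

# Hypothetical example system, for illustration only.
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P = doubling_dare(A, B, Q, R)
# Residual of the Riccati equation should be near machine precision.
res = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) + Q - P
print(np.linalg.norm(res))
```

As in the paper's doubling PNA, the work per step is a fixed set of matrix operations, and the number of steps needed to reach the steady-state solution grows only logarithmically with the time horizon covered.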
