Abstract
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration. We describe in detail how the algorithm works for optimization problems with L Lyapunov inequalities, each of size m. We prove an overall worst-case operation count of O(m^{5.5} L^{1.5}). The average-case complexity appears to be closer to O(m^{4} L^{1.5}). This estimate is justified by extensive numerical experimentation, and is consistent with other researchers' experience with the practical performance of interior-point algorithms for linear programming. This result means that the computational cost of extending current control theory based on the solution of Lyapunov or Riccati equations to a theory that is based on the solution of (multiple, coupled) Lyapunov or Riccati inequalities is modest.
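As a rough illustration of the approximate least-squares step mentioned above (not the paper's algorithm itself), the sketch below runs a plain conjugate-gradient iteration on a symmetric positive definite system and stops early at a loose tolerance or a fixed iteration cap. The matrix H, right-hand side b, and the tolerance and iteration limit are made-up placeholders standing in for the least-squares system solved at each interior-point iteration.

```python
import numpy as np


def truncated_cg(H, b, max_iter=20, tol=1e-3):
    """Conjugate gradients for H x = b with H symmetric positive definite,
    terminated early after max_iter steps or at a loose relative residual."""
    x = np.zeros_like(b)
    r = b - H @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Hp = H @ p
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break          # stop early: an approximate solution suffices
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x


# Toy example: a random positive definite system as a stand-in.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A.T @ A + 50 * np.eye(50)
b = rng.standard_normal(50)
x = truncated_cg(H, b)
print(np.linalg.norm(H @ x - b) / np.linalg.norm(b))
```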