Abstract
In this paper, two modified least-squares iterative algorithms are presented for solving Lyapunov matrix equations. The first algorithm is based on the hierarchical identification principle and can be viewed as a surrogate for the least-squares iterative algorithm proposed by Ding et al., whose convergence had not previously been proved. The second is motivated by a new form of fixed-point iterative scheme. Using a new matrix norm as a tool, global convergence is proved for both algorithms. Furthermore, the feasible sets of their convergence factors are analyzed. Finally, a numerical example is presented to illustrate the theoretical results.
Highlights
Matrix equations are often encountered in control theory [1, 2], system theory [3, 4], and stability analysis [5–7].
Based on iterative scheme (2.2)–(2.4), we propose the following modified least-squares iterative algorithm.
In this paper, two modified least-squares iterative algorithms are proposed for solving the Lyapunov matrix equations, and their global convergence is proved.
Summary
Matrix equations are often encountered in control theory [1, 2], system theory [3, 4], and stability analysis [5–7]. Two conjugate gradient methods are proposed in [7] to solve the consistent or inconsistent equation (1.1). Both have the finite-termination property in the absence of round-off errors and, with a suitable choice of initial matrix, yield the least-Frobenius-norm solution or the least-squares solution of least Frobenius norm of equation (1.1). Convergence of the least-squares iterative algorithm is not proved in [18]; its authors state that the convergence proof is very difficult and still requires further study.
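To make the least-squares iteration idea concrete, the following is a minimal sketch of a *generic* gradient-descent iteration for a continuous Lyapunov equation of the form AX + XAᵀ = F. It is not the paper's two modified algorithms (their update rules depend on scheme (2.2)–(2.4), which is not reproduced here); the function name `lyap_gradient_iter` and the step size `mu` are illustrative assumptions. The iteration minimizes the least-squares residual ‖AX + XAᵀ − F‖²_F, whose gradient at X is proportional to AᵀR + RA with R = AX + XAᵀ − F.

```python
import numpy as np

def lyap_gradient_iter(A, F, mu, iters=5000, tol=1e-10):
    """Generic least-squares gradient iteration for A X + X A^T = F.

    Illustrative sketch only (not the paper's modified algorithms):
    gradient descent on ||A X + X A^T - F||_F^2, whose gradient at X
    is proportional to A^T R + R A, where R = A X + X A^T - F.
    The convergence factor mu must be small enough, as analyzed in
    the paper via the feasible set of convergence factors.
    """
    n = A.shape[0]
    X = np.zeros((n, n))
    for _ in range(iters):
        R = A @ X + X @ A.T - F          # current residual
        if np.linalg.norm(R, "fro") < tol:
            break
        X = X - mu * (A.T @ R + R @ A)   # gradient step
    return X

# Usage: build F from a known X so the equation is consistent.
rng = np.random.default_rng(0)
A = -np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # stable A
X_true = rng.standard_normal((3, 3))
X_true = X_true + X_true.T
F = A @ X_true + X_true @ A.T
X = lyap_gradient_iter(A, F, mu=0.05)
residual = np.linalg.norm(A @ X + X @ A.T - F, "fro")
```

Since A here is stable, the eigenvalue sums λᵢ + λⱼ are bounded away from zero, so the equation has a unique solution and the plain gradient iteration converges for sufficiently small `mu`.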