Abstract

Block Krylov subspace methods (KSMs) comprise building blocks in many state‐of‐the‐art solvers for large‐scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well‐explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.
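To make the compression step concrete, here is a minimal sketch (Python with NumPy; the function name compress_factors and the tolerance are our own choices, not from the paper) of how a low-rank residual factorization R ≈ U V^* could be recompressed to a smaller rank before a restart:

```python
import numpy as np

def compress_factors(U, V, tol=1e-10):
    """Recompress the low-rank product R = U @ V.conj().T to a smaller rank.

    U: (n, k), V: (m, k). Returns factors with fewer columns whose product
    approximates R up to a relative tolerance `tol`. Illustrative only.
    """
    # Thin QR factorizations of both tall-and-skinny factors.
    Qu, Ru = np.linalg.qr(U)                 # U = Qu @ Ru
    Qv, Rv = np.linalg.qr(V)                 # V = Qv @ Rv

    # SVD of the small k-by-k core matrix Ru @ Rv^*.
    W, s, Zh = np.linalg.svd(Ru @ Rv.conj().T)

    # Keep only singular values above the relative tolerance.
    r = max(1, int(np.sum(s > tol * s[0])))

    U_new = Qu @ (W[:, :r] * s[:r])          # absorb singular values into the left factor
    V_new = Qv @ Zh[:r, :].conj().T
    return U_new, V_new
```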

Highlights

  • This work is concerned with numerical methods for solving large-scale Sylvester matrix equations of the form (1.1) AX + XB + CD^* = 0, with coefficient matrices A, B ∈ C^{n×n} and C, D ∈ C^{n×s} (a small dense example is sketched after this list)

  • We set the memory buffer of our compress-and-restart routine equal to the memory consumption of the extended Krylov subspace method (EKSM); i.e., we set mem_max in Algorithm 2.2 equal to 264. With this setting, Restarted Sylv needs 2 restarts, for a total of 85 iterations, to converge, and it computes an approximate solution of rank 57, equal to the rank of the solution returned by all the EKSM variants

  • Much work has been devoted to adapting iterative methods for large and sparse linear systems to these architectures, but straightforward extensions of successful strategies for linear systems to matrix equations are not always feasible
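For orientation, the sketch below (Python with NumPy/SciPy; the sizes n = 50 and s = 2 are arbitrary toy values, not taken from the paper's experiments) sets up a small dense instance of equation (1.1) and solves it directly with SciPy's Bartels–Stewart routine. Such a direct solve is only feasible at small scale, which is what motivates the (restarted) Krylov subspace approach for large n:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, s = 50, 2                      # toy sizes; the paper targets large, sparse A and B

# Random coefficients, shifted so that the spectra of A and -B are separated
# and the Sylvester equation has a unique solution.
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, s))
D = rng.standard_normal((n, s))

# A X + X B + C D^* = 0  <=>  A X + X B = -C D^*  (Bartels-Stewart solve).
X = solve_sylvester(A, B, -C @ D.conj().T)

print("relative residual:",
      np.linalg.norm(A @ X + X @ B + C @ D.conj().T) / np.linalg.norm(C @ D.conj().T))
```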


Summary

Introduction

This work is concerned with numerical methods for solving large-scale Sylvester matrix equations of the form (1.1). The method proceeds in two passes: in the first pass, only the projected equation is constructed and solved; in the second pass, the method computes the product of the Krylov subspace bases with the low-rank factors of the projected solution. An evident shortcoming of this procedure is that every time we restart, the rank of the residual may double, leading to increasingly large Krylov bases that will inevitably exceed the available memory. This issue can be mitigated by compressing the residual factors before constructing the Krylov subspace in each cycle.

[Algorithm excerpt, step 7: compute (incrementally) the block Arnoldi relation for K_j(B, D^(k)) and store [V_1 | ··· | V_{j+1}] and G_j^(k).]
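The quoted step relies on the block Arnoldi relation for the block Krylov subspace K_j(B, D^(k)). A bare-bones sketch of building such a relation (Python with NumPy, plain block Gram–Schmidt with no re-orthogonalization or breakdown handling; the name block_arnoldi is ours, not the paper's) might look as follows:

```python
import numpy as np

def block_arnoldi(A, V1, j):
    """Build j steps of a block Arnoldi relation A @ V[:, :j*s] = V @ H.

    A: (n, n) operator (only products with blocks of s vectors are needed),
    V1: (n, s) starting block with orthonormal columns.
    Returns the basis [V_1 | ... | V_{j+1}] of the block Krylov subspace and
    the block Hessenberg matrix H of size ((j+1)*s, j*s). Illustrative only.
    """
    n, s = V1.shape
    V = np.zeros((n, (j + 1) * s), dtype=V1.dtype)
    H = np.zeros(((j + 1) * s, j * s), dtype=V1.dtype)
    V[:, :s] = V1

    for k in range(j):
        W = A @ V[:, k * s:(k + 1) * s]              # expand the subspace by one block
        for i in range(k + 1):                        # block Gram-Schmidt against earlier blocks
            Vi = V[:, i * s:(i + 1) * s]
            Hik = Vi.conj().T @ W
            H[i * s:(i + 1) * s, k * s:(k + 1) * s] = Hik
            W = W - Vi @ Hik
        Q, R = np.linalg.qr(W)                        # orthonormalize the new block
        V[:, (k + 1) * s:(k + 2) * s] = Q
        H[(k + 1) * s:(k + 2) * s, k * s:(k + 1) * s] = R

    return V, H
```

In a compress-and-restart cycle, a relation of this kind would be built for the current (compressed) residual factors, the projected Sylvester equation would then be solved, and the resulting residual factors would be recompressed before starting the next cycle.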

