Abstract
For the nonlinear eigenvalue problem $A(\hat \lambda )\hat x = 0$, where $A( \cdot )$ is a matrix-valued operator, residual inverse iteration with shift $\sigma $ is defined by \[ x^{(l + 1)} : = {\text{const. }}(x^{(l)} - A(\sigma )^{ - 1} A(\lambda _{l + 1} )x^{(l)} ),\] where $\lambda _{l + 1} $ is an appropriate approximation of $\hat \lambda $. In the linear case, $A(\lambda ) = A - \lambda I$, this is theoretically equivalent to ordinary inverse iteration, but the residual formulation results in a considerably higher limit accuracy when the residual $A(\lambda _{l + 1} )x^{(l)} = Ax^{(l)} - \lambda _{l + 1} x^{(l)} $ is accumulated in double precision. In the nonlinear case, if $\sigma $ is sufficiently close to $\hat \lambda $, convergence is at least linear with convergence factor proportional to $| {\sigma - \hat \lambda } |$. As with ordinary inverse iteration, the convergence can be accelerated by using variable shifts.
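The iteration above can be sketched for the linear case $A(\lambda) = A - \lambda I$. This is a minimal illustration, not the paper's implementation: it assumes a real symmetric $A$, a fixed shift $\sigma$, and the Rayleigh quotient as the approximation $\lambda_{l+1}$; in practice one would factor $A(\sigma)$ once (e.g. an LU decomposition) rather than form an explicit inverse.

```python
import numpy as np

def residual_inverse_iteration(A, sigma, x0, tol=1e-10, max_iter=100):
    """Residual inverse iteration for the linear case A(lam) = A - lam*I.

    Sketch under assumptions: A symmetric, fixed shift sigma, and the
    Rayleigh quotient x^T A x / x^T x used as the approximation lam_{l+1}.
    Converges to the eigenpair whose eigenvalue is nearest sigma.
    """
    n = A.shape[0]
    # A(sigma) is fixed, so it is "factored" once and reused every step.
    # (Explicit inverse is acceptable only for a tiny demo like this.)
    A_sigma_inv = np.linalg.inv(A - sigma * np.eye(n))
    x = x0 / np.linalg.norm(x0)
    lam = x @ A @ x / (x @ x)
    for _ in range(max_iter):
        lam = x @ A @ x / (x @ x)          # approximation lam_{l+1}
        r = A @ x - lam * x                # residual A(lam_{l+1}) x^{(l)}
        x = x - A_sigma_inv @ r            # x^{(l)} - A(sigma)^{-1} r
        x /= np.linalg.norm(x)             # the "const." normalization
        if np.linalg.norm(A @ x - lam * x) < tol:
            break
    return lam, x
```

In this linear setting the update reduces to $x^{(l+1)} = \text{const.}\,(\lambda_{l+1} - \sigma)(A - \sigma I)^{-1} x^{(l)}$, which makes the equivalence to ordinary inverse iteration visible; the residual form differs only in how rounding errors enter, which is the source of the higher limit accuracy discussed in the abstract.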