For the nonlinear eigenvalue problem $A(\hat \lambda )\hat x = 0$, where $A( \cdot )$ is a matrix-valued operator, residual inverse iteration with shift $\sigma $ is defined by \[ x^{(l + 1)} : = {\text{const. }}(x^{(l)} - A(\sigma )^{ - 1} A(\lambda _{l + 1} )x^{(l)} ),\] where $\lambda _{l + 1} $ is an appropriate approximation of $\hat \lambda $. In the linear case, $A(\lambda ) = A - \lambda I$, this is theoretically equivalent to ordinary inverse iteration, but the residual formulation results in a considerably higher limit accuracy when the residual $A(\lambda _{l + 1} )x^{(l)} = Ax^{(l)} - \lambda _{l + 1} x^{(l)} $ is accumulated in double precision. In the nonlinear case, if $\sigma $ is sufficiently close to $\hat \lambda $, convergence is at least linear with convergence factor proportional to $| {\sigma - \hat \lambda } |$. As with ordinary inverse iteration, the convergence can be accelerated by using variable shifts.
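The iteration above can be sketched in NumPy. This is not the paper's implementation but a minimal sketch under stated assumptions: the shift $\sigma$ is held fixed, $A(\sigma)$ is inverted once up front (a real code would reuse an LU factorization instead), and the eigenvalue update $\lambda_{l+1}$ is taken as the root of $x^{(l)\,T} A(\lambda)\, x^{(l)} = 0$, found by scalar Newton with a numerical derivative; this is one of several viable normalization choices and is our assumption, not prescribed by the abstract.

```python
import numpy as np

def residual_inverse_iteration(A, x0, sigma, tol=1e-10, max_iter=50):
    """Residual inverse iteration with a fixed shift sigma (minimal sketch).

    A     : callable, A(lam) -> (n, n) ndarray, the matrix-valued operator
    x0    : starting guess for the eigenvector
    sigma : fixed shift, assumed close to the wanted eigenvalue
    Returns (lam, x) with A(lam) @ x approximately zero.
    """
    # Factor A(sigma) once; it is reused in every iteration.  For a real
    # problem, store an LU factorization rather than the explicit inverse.
    Asig_inv = np.linalg.inv(A(sigma))
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    lam = sigma
    for _ in range(max_iter):
        # Eigenvalue update: solve x^T A(lam) x = 0 for lam by scalar Newton
        # with a central-difference derivative (assumed normalization choice).
        f = lambda l: x @ (A(l) @ x)
        for _ in range(20):
            h = 1e-7
            df = (f(lam + h) - f(lam - h)) / (2.0 * h)
            step = f(lam) / df
            lam -= step
            if abs(step) < 1e-13:
                break
        # Residual correction x - A(sigma)^{-1} A(lam) x; the "const." in the
        # abstract's formula is realized here as normalization to unit length.
        r = A(lam) @ x
        x_new = x - Asig_inv @ r
        x_new /= np.linalg.norm(x_new)
        if x_new @ x < 0.0:          # fix the sign so iterates are comparable
            x_new = -x_new
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return lam, x
```

In the linear case $A(\lambda) = K - \lambda I$ with symmetric $K$, the Newton step reduces to the Rayleigh quotient and the correction collapses to $x^{(l+1)} \propto (K - \sigma I)^{-1} x^{(l)}$, i.e. ordinary inverse iteration, so the fixed-shift iterates converge linearly to the eigenpair nearest $\sigma$, consistent with the convergence-factor claim above.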