Abstract

Computing more than one eigenvalue for (large sparse) one-parameter polynomial and general nonlinear eigenproblems, as well as for multiparameter linear and nonlinear eigenproblems, is a much harder task than for standard eigenvalue problems. We present simple but efficient selection methods, based on divided differences, for this task. Selection means that the approximate eigenpair is picked from candidate pairs that satisfy a suitable criterion. The goal of this procedure is to steer the process away from already detected pairs. In contrast to locking techniques, it is not necessary to keep converged eigenvectors in the search space, so that the entire search space may be devoted to new information. The selection techniques are applicable to many types of matrix eigenvalue problems; standard deflation is feasible only for linear one-parameter problems. The methods are easy to understand and implement. Although the use of divided differences is well known in the context of nonlinear eigenproblems, the proposed selection techniques are new for one-parameter problems. For multiparameter problems, we improve on and generalize our previous work. We also show how to use divided differences in the framework of homogeneous coordinates, which may be appropriate for generalized eigenvalue problems with infinite eigenvalues. While the approaches are valuable alternatives for one-parameter nonlinear eigenproblems, they seem to be the only option for multiparameter problems.
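
To convey the flavor of the divided-difference idea in the simplest setting, here is a minimal scalar sketch (illustrative only; the names and the first-order criterion are chosen for the example, not taken from the matrix-valued methods developed in the paper). Once a root has been detected, a divided difference taken with respect to that root stays away from zero near it, while it still vanishes at the roots that remain to be found.

```python
import numpy as np

# Scalar analogy: p plays the role of the eigenvalue problem, its roots 1, 2, 3
# the role of the eigenvalues, and the root 1 has already been detected.
p = np.poly1d([1.0, 2.0, 3.0], r=True)      # p(theta) = (theta-1)(theta-2)(theta-3)
detected = 1.0

def divided_difference(theta, sigma=detected):
    # First-order divided difference p[sigma, theta] = (p(theta) - p(sigma)) / (theta - sigma).
    return (p(theta) - p(sigma)) / (theta - sigma)

for theta in [1.0 + 1e-8, 2.0, 3.0]:
    print(f"theta = {theta}: |p| = {abs(p(theta)):.1e}, "
          f"|p[detected, theta]| = {abs(divided_difference(theta)):.1e}")

# |p| is (nearly) zero at all three points, so it cannot tell old roots from new ones.
# The divided difference is O(1) near the detected root 1 but zero at the new roots 2 and 3.
```

In this scalar picture, ranking candidate approximations by the divided-difference value instead of the plain residual steers the search away from roots that have already been found.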

Highlights

  • In large sparse matrix eigenvalue problems, a common task is to compute a few eigenvalues closest to a given target, largest in magnitude, or rightmost in the complex plane

  • We present several selection methods to compute more than one eigenpair of linear and nonlinear, and one-parameter and multiparameter eigenvalue problems (MEPs)

  • For problems that are not truly large-scale, the system in step 2 of Algorithm 1 can be solved with a direct instead of an iterative method; see the sketch below. This applies in particular to the selection techniques for multiparameter eigenvalue problems, where the problem size is often relatively modest: p vectors of length n are sought, where p is the number of parameters, so the total problem size is np
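
A minimal sketch of this direct-versus-iterative choice, under the assumption that the step in question amounts to solving a sparse linear system of dimension np (the matrix `S`, its values, and the right-hand side `b` below are placeholders, not the actual system of Algorithm 1):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Placeholder sparse system of total dimension N = n*p (p = number of parameters).
n, p = 500, 2
N = n * p
rng = np.random.default_rng(0)
S = (sp.random(N, N, density=5.0 / N, random_state=rng) + 10.0 * sp.eye(N)).tocsc()
b = rng.standard_normal(N)

# Moderate size: a sparse LU factorization (direct solve) is cheap and robust.
x_direct = spla.splu(S).solve(b)

# Truly large scale: an iterative method such as GMRES (ideally preconditioned).
x_iter, info = spla.gmres(S, b, atol=1e-10)

print(np.linalg.norm(S @ x_direct - b), np.linalg.norm(S @ x_iter - b), info)
```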



Introduction

In large sparse matrix eigenvalue problems, a common task is to compute a few eigenvalues closest to a given target, largest in magnitude, or rightmost in the complex plane. We present several selection methods to compute more than one eigenpair of linear and nonlinear, and one-parameter and multiparameter eigenvalue problems (MEPs). To the best of our knowledge, the use of selection techniques to compute several eigenvalues in the form described in this paper is new for one-parameter nonlinear eigenproblems: the QEP (4), PEP (3), and general NEP (2). These problems may have infinite eigenvalues when the leading coefficient matrix is singular.
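
As a small concrete instance of the last point, the sketch below builds a quadratic eigenvalue problem with a singular leading coefficient, linearizes it to a generalized eigenvalue problem, and reads the eigenvalues off in homogeneous coordinates (α, β), where β ≈ 0 signals an infinite eigenvalue (the matrices are arbitrary toy data; only the singularity of M matters):

```python
import numpy as np
from scipy.linalg import eig

# Toy QEP  Q(lam) x = (lam^2 M + lam C + K) x = 0  with singular leading matrix M.
M = np.array([[1.0, 0.0], [0.0, 0.0]])      # singular -> infinite eigenvalue(s)
C = np.array([[2.0, 1.0], [0.0, 3.0]])
K = np.array([[1.0, 0.0], [1.0, 1.0]])
n = M.shape[0]
I, Z = np.eye(n), np.zeros((n, n))

# First companion linearization  A z = lam B z  with  z = [x; lam x];
# B inherits the singularity of M, hence the pencil has infinite eigenvalues.
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])

# Homogeneous coordinates: lam = alpha / beta, with beta ~ 0 for infinite eigenvalues.
(alpha, beta), _ = eig(A, B, homogeneous_eigvals=True)
for a_i, b_i in zip(alpha, beta):
    if abs(b_i) <= 1e-12 * max(abs(a_i), 1.0):
        print("infinite eigenvalue")
    else:
        print("finite eigenvalue:", a_i / b_i)
```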

Selection for nonlinear one-parameter eigenvalue problems
Selection for polynomial eigenproblems in homogeneous coordinates
Multiparameter eigenvalue problems
Comparison with other approaches
Numerical examples
Conclusions