Abstract

An algorithm is developed for computing the matrix cosine, building on a proposal of Serbin and Blalock. The algorithm scales the matrix by a power of 2 to make the ∞-norm less than or equal to 1, evaluates a Padé approximant, and then uses the double angle formula cos(2A) = 2cos²(A) − I to recover the cosine of the original matrix. In addition, argument reduction and balancing are used initially to decrease the norm. We give truncation and rounding error analyses to show that an [8,8] Padé approximant produces the cosine of the scaled matrix correct to machine accuracy in IEEE double precision arithmetic, and we show that this Padé approximant can be evaluated more efficiently than a corresponding Taylor series approximation. We also provide error analysis to bound the propagation of errors in the double angle recurrence. Numerical experiments show that our algorithm is competitive in accuracy with the Schur–Parlett method of Davies and Higham, which is designed for general matrix functions, and it is substantially less expensive than that method for matrices of ∞-norm of order 1. The dominant computational kernels in the algorithm are matrix multiplication and solution of a linear system with multiple right-hand sides, so the algorithm is well suited to modern computer architectures.
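The following is a minimal sketch of the scaling / double-angle idea described above, not the paper's algorithm: it uses NumPy, substitutes a plain truncated Taylor series for the [8,8] Padé approximant, omits the argument reduction and balancing steps, and the function name and term count are hypothetical.

```python
import numpy as np

def cosm_double_angle(A, taylor_terms=16):
    """Sketch: scale A so its inf-norm is <= 1, approximate cos of the
    scaled matrix (Taylor series here, standing in for the [8,8] Pade
    approximant), then apply cos(2X) = 2 cos(X)^2 - I to undo the scaling."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    norm = np.linalg.norm(A, np.inf)
    # Choose s so that ||A / 2^s||_inf <= 1.
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 1 else 0
    X = A / 2.0**s

    # Truncated Taylor series: cos(X) = sum_k (-1)^k X^(2k) / (2k)!
    I = np.eye(n)
    C = I.copy()
    term = I.copy()
    X2 = X @ X
    for k in range(1, taylor_terms):
        term = -term @ X2 / ((2 * k - 1) * (2 * k))
        C = C + term

    # Double angle recurrence, applied s times to recover cos(A).
    for _ in range(s):
        C = 2.0 * (C @ C) - I
    return C
```

As the abstract notes, the cost is dominated by matrix multiplications (and, in the Padé case, one multiple right-hand-side solve), which is why the method maps well onto level-3 BLAS kernels.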
