Abstract

A new algorithm is derived for computing the action $f(tA)B$, where $A$ is an $n\times n$ matrix, $B$ is $n\times n_0$ with $n_0 \ll n$, and $f$ is the cosine, sinc, sine, hyperbolic cosine, hyperbolic sinc, or hyperbolic sine function. When $f$ is even, $f(tA^{1/2})B$ can be computed without explicitly forming $A^{1/2}$, where $A^{1/2}$ denotes any matrix square root of $A$. The algorithm offers six independent output options given $t$, $A$, $B$, and a tolerance; for each option, the actions of a pair of trigonometric or hyperbolic matrix functions are computed simultaneously. The algorithm scales the matrix $A$ down by a positive integer $s$, approximates $f(s^{-1}tA^\sigma)B$, where $\sigma$ is either $1$ or $1/2$, by a truncated Taylor series, and finally uses the recurrences of the Chebyshev polynomials of the first and second kind to recover $f(tA^\sigma)B$. The scaling parameter and the degree of the Taylor polynomial are selected from a forward error analysis and a sequence of the form $\|A^k\|^{1/k}$ so that the overall computational cost of the algorithm is minimized. Shifting is used, where applicable, as a preprocessing step to reduce the scaling parameter. The algorithm works for any matrix $A$, and its computational cost is dominated by products of $A$ with $n\times n_0$ matrices, which can take advantage of level-3 BLAS implementations. Our numerical experiments show that the new algorithm behaves in a forward stable manner and on most problems outperforms the existing algorithms in terms of CPU time, computational cost, and accuracy.
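
To make the scaling-and-recovery step concrete, the following is a minimal NumPy sketch for the cosine/sine pair with $\sigma = 1$. It assumes the scaling parameter $s$ and the Taylor degree $m$ are supplied as inputs, whereas the paper's algorithm selects them adaptively from the forward error analysis and the sequence $\|A^k\|^{1/k}$, and also applies shifting; the function names here are hypothetical and the error control and the other output options are omitted.

```python
import numpy as np

def taylor_cos_action(A, X, theta, m):
    """Apply a degree-2m truncated Taylor polynomial of cos(theta*A) to the
    block X, using only products of A with n-by-n0 blocks (A is never
    squared explicitly)."""
    term = X.copy()
    out = X.copy()
    for j in range(1, m + 1):
        # term_j = -theta^2 * A^2 @ term_{j-1} / ((2j-1)(2j))
        term = (theta ** 2) * (A @ (A @ term))
        term = -term / ((2 * j - 1) * (2 * j))
        out = out + term
    return out

def taylor_sin_action(A, X, theta, m):
    """Apply a degree-(2m+1) truncated Taylor polynomial of sin(theta*A) to X."""
    term = theta * (A @ X)
    out = term
    for j in range(1, m + 1):
        term = (theta ** 2) * (A @ (A @ term))
        term = -term / ((2 * j) * (2 * j + 1))
        out = out + term
    return out

def cos_sin_action(A, B, t, s, m):
    """Recover cos(t*A)@B and sin(t*A)@B from the scaled problem h = t/s via
    the three-term recurrences underlying the Chebyshev polynomials of the
    first and second kind:
        cos((k+1)hA)B = 2 cos(hA) cos(khA)B - cos((k-1)hA)B,
        sin((k+1)hA)B = 2 cos(hA) sin(khA)B - sin((k-1)hA)B,
    where every product with cos(hA) is replaced by its truncated Taylor
    approximation acting on the current block."""
    h = t / s
    C_prev = B.astype(float)                   # k = 0: cos(0)B = B
    S_prev = np.zeros_like(B, dtype=float)     # k = 0: sin(0)B = 0
    C = taylor_cos_action(A, C_prev, h, m)     # k = 1
    S = taylor_sin_action(A, C_prev, h, m)
    for _ in range(s - 1):
        C, C_prev = 2.0 * taylor_cos_action(A, C, h, m) - C_prev, C
        S, S_prev = 2.0 * taylor_cos_action(A, S, h, m) - S_prev, S
    return C, S
```

For small $n$ the output can be checked against a dense reference such as scipy.linalg.cosm and sinm applied to $tA$. Note that this sketch covers only one of the six output options and takes $\sigma = 1$; handling $f(tA^{1/2})B$ without forming $A^{1/2}$, the sinc and hyperbolic pairs, and the cost-minimizing choice of $s$ and $m$ are what the paper's algorithm adds on top of this basic scheme.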
