Abstract

The most popular method for computing the matrix logarithm combines the inverse scaling and squaring method with a Padé approximation, sometimes accompanied by an initial Schur decomposition. In this work, we present a Taylor series algorithm, based on the transformation-free approach of the inverse scaling and squaring technique, that uses recent matrix polynomial evaluation formulas to compute the Taylor approximation of the matrix logarithm more efficiently than the Paterson–Stockmeyer method. Two MATLAB implementations of this algorithm, based on relative forward and backward error analysis, respectively, were developed and compared with several state-of-the-art MATLAB functions. Numerical tests showed that the new implementations are generally more accurate than the previously available codes, with an intermediate execution time among all the codes compared.
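
To make the approach concrete, the MATLAB sketch below (illustrative only, assuming A has no eigenvalues on the closed negative real axis; it is not the authors' implementation) combines the inverse scaling and squaring idea with a truncated Taylor series: repeated square roots bring A close to the identity, the Taylor polynomial of log(I + B) is evaluated with a plain Horner scheme rather than the paper's efficient formulas, and the result is rescaled by 2^s.

    function L = logm_iss_taylor_sketch(A, s, m)
    % Illustrative inverse scaling and squaring with a truncated Taylor series
    % (NOT the paper's algorithm or its MATLAB codes). The number of square
    % roots s and the Taylor degree m are supplied by the caller.
        n = size(A, 1);
        % Inverse scaling phase: repeated square roots bring A close to I.
        for k = 1:s
            A = sqrtm(A);
        end
        B = A - eye(n);
        % Truncated Taylor series log(I+B) ~ sum_{k=1}^{m} (-1)^(k+1)*B^k/k,
        % evaluated here with a simple Horner scheme.
        L = zeros(n);
        for k = m:-1:1
            L = B * ((-1)^(k+1)/k * eye(n) + L);
        end
        % Squaring phase: log(A) = 2^s * log(A^(1/2^s)).
        L = 2^s * L;
    end

For instance, with s = 5 square roots and degree m = 8, the residual norm(expm(L) - A, 1)/norm(A, 1) for a moderately conditioned A should be small; the paper's algorithms instead choose s and m from the error analysis and evaluate the polynomial with fewer matrix products.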

Highlights

  • The calculus of matrix functions has long been an area of interest in applied mathematics due to its many applications across science and engineering; see [1] and the references therein.

  • There are infinitely many solutions X of Equation (1), the matrix equation e^X = A, but we focus only on the principal matrix logarithm, or standard branch of the logarithm, denoted by log(A): the unique logarithm of A whose eigenvalues all lie in the strip {z ∈ C : −π < Im(z) < π} (a numerical check of this property is sketched after this list).

  • This principal matrix logarithm is the one most widely used in applications across many fields of research, from pure science to engineering [2], such as quantum chemistry and mechanics [3,4], buckling simulation [5], biomolecular dynamics [6], machine learning [7,8,9,10], graph theory [11,12], the study of Markov chains [13], sociology [14], optics [15], mechanics [16], computer graphics [17], control theory [18], computer-aided design (CAD) [19], optimization [20], the study of viscoelastic fluids [21,22], the analysis of topological distances between networks [23], the study of brain–machine interfaces [24], and statistics and data analysis [25], among other areas.
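
As referenced in the highlight above, the short MATLAB check below (illustrative only; the matrix A is an arbitrary example with no eigenvalues on the closed negative real axis) verifies the two defining properties of the principal logarithm returned by the built-in logm: exponentiating it recovers A, and its eigenvalues lie in the strip −π < Im(z) < π.

    % Arbitrary test matrix with positive real eigenvalues (4, 3 and 5).
    A = [4 1 0; 0 3 2; 0 0 5];
    L = logm(A);                                 % principal matrix logarithm
    relres  = norm(expm(L) - A, 1) / norm(A, 1)  % should be of order eps
    inStrip = all(abs(imag(eig(L))) < pi)        % should be logical 1 (true)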


Summary

Introduction and Notation

The calculus of matrix functions has long been an area of interest in applied mathematics due to its many applications across science and engineering; see [1] and the references therein. This paper is organized as follows: Section 2 describes an inverse scaling and squaring Taylor algorithm, based on the efficient evaluation formulas of [41], for approximating the matrix logarithm, including an error analysis.

The Sastre approximations and evaluation formulas based on (4) are denoted by S_m^T(A), where the superscript indicates the type of polynomial approximation used, in this case the Taylor polynomial (4), and the subindex m is the maximum degree of the corresponding polynomial approximation. To check whether the rounded coefficient solutions are accurate enough to evaluate (6), we follow the stability check of [41] (Example 3.1): the rounded coefficients are substituted into the original system of Equations (7)–(15), and the relative error they give with respect to each coefficient b_i, for i = 0, 1, …, is examined. The quantity −S_8^T(−A) is evaluated using (6) with the coefficients from Table 1.
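
For context, the evaluation formulas (6) are designed to use fewer matrix products than the classical Paterson–Stockmeyer scheme at the same polynomial degree. The generic MATLAB sketch below shows that baseline scheme for an arbitrary coefficient vector; the function name, the block size q, and the coefficients are illustrative and are not the Table 1 values, which are not reproduced here.

    function P = polyvalm_ps(c, B, q)
    % Generic Paterson-Stockmeyer evaluation of the matrix polynomial
    % p(B) = sum_{k=0}^{m} c(k+1)*B^k, with block size q (typically ~sqrt(m)).
        n = size(B, 1);
        m = numel(c) - 1;                 % polynomial degree
        Bpow = cell(q + 1, 1);            % precomputed powers B^0, ..., B^q
        Bpow{1} = eye(n);
        for i = 1:q
            Bpow{i+1} = Bpow{i} * B;
        end
        P = zeros(n);
        % Horner scheme in B^q over blocks of at most q consecutive coefficients.
        for j = floor(m / q):-1:0
            blockEnd = min(q - 1, m - j*q);
            S = zeros(n);
            for i = 0:blockEnd
                S = S + c(j*q + i + 1) * Bpow{i+1};
            end
            P = P * Bpow{q+1} + S;
        end
    end

For the degree-m Taylor polynomial of log(I + B), the coefficient vector would be c = [0, ((-1).^(0:m-1)) ./ (1:m)]; choosing q close to sqrt(m) roughly minimizes the number of matrix products, which is the cost that the formulas (6) reduce further.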

Evaluation of Higher-Order Taylor-Based Approximations
Numerical Experiments
Conclusions