Abstract

The Extrapolation Algorithm is a technique devised in 1962 for accelerating the rate of convergence of slowly converging Picard iterations for fixed point problems. Versions of this technique are now called Anderson Acceleration in the applied mathematics community and Anderson Mixing in the physics and chemistry communities, and these are related to several other methods extant in the literature. We seek here to broaden and deepen the conceptual foundations for these methods, and to clarify their relationship to certain iterative methods for root-finding problems. For this purpose, the Extrapolation Algorithm will be reviewed in some detail, and selected papers from the existing literature will be discussed, both from conceptual and implementation perspectives.

Highlights

  • We simplify the notation to A, b and c, and set m = min(ℓ, M) and n = N

  • In 1962, during the course of my doctoral dissertation research, I devised a technique for accelerating the convergence of the Picard iteration associated with a fixed point problem, which I called the Extrapolation Algorithm

  • We have (x − y)∗y = x∗y − y∗y, so we find that −v∗y …


Summary


Comments on "Anderson Acceleration, Mixing and Extrapolation." Numerical Algorithms 80 (1): 135-234.

In 1962, during the course of my doctoral dissertation research, I devised a technique for accelerating the convergence of the Picard iteration associated with a fixed point problem, which I called the Extrapolation Algorithm.

The Extrapolation Algorithm. For g : R^N → R^N, consider the problem of finding a fixed point x ∈ R^N such that g(x) = x. In outline form, the basic Extrapolation Algorithm proceeds as follows: choose the maximal m(ℓ) such that there are well-determined θ_k^(ℓ), 0 ≤ k ≤ m(ℓ), minimizing ‖v^(ℓ) − u^(ℓ)‖, with 0 ≤ m(ℓ) ≤ min(ℓ, M) ≤ N, and satisfying θ_0^(ℓ) > 0. There is always a unique element v^(ℓ) − u^(ℓ) of minimal norm in the affine subspace, namely the one closest to 0.
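The least-squares extrapolation step described above can be illustrated with a short sketch in the now-standard Anderson acceleration formulation: at each step the residuals u = g(x) − x of up to m = min(ℓ, M) recent iterates are combined, by solving a small linear least-squares problem, into the combination of minimal norm, and the next iterate is extrapolated accordingly. This is a minimal illustrative implementation, not the paper's exact algorithm; the function and parameter names (anderson, M, tol, max_iter) are ours, not Anderson's notation.

```python
import numpy as np

def anderson(g, x0, M=5, tol=1e-10, max_iter=200):
    """Sketch of Anderson acceleration for the fixed point problem g(x) = x.

    Keeps up to M previous iterates; each step solves a least-squares
    problem over residual differences and extrapolates. With M = 0 history
    this reduces to the plain Picard iteration x <- g(x).
    """
    x = np.asarray(x0, dtype=float)
    X = [x]            # recent iterates x_k
    G = [g(x)]         # their images g(x_k)
    for _ in range(max_iter):
        F = [Gi - Xi for Gi, Xi in zip(G, X)]   # residuals u_k = g(x_k) - x_k
        if np.linalg.norm(F[-1]) < tol:
            return X[-1]
        m = len(X) - 1                          # history depth, at most M
        if m == 0:
            x_new = G[-1]                       # plain Picard step
        else:
            # Minimize || F[-1] - dF @ gamma || over gamma (least squares),
            # i.e. find the minimal-norm combination of recent residuals.
            dF = np.column_stack([F[i + 1] - F[i] for i in range(m)])
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            dG = np.column_stack([G[i + 1] - G[i] for i in range(m)])
            x_new = G[-1] - dG @ gamma          # extrapolated iterate
        X.append(x_new)
        G.append(g(x_new))
        if len(X) > M + 1:                      # truncate history to M steps
            X.pop(0)
            G.pop(0)
    return X[-1]
```

For example, applied to g(x) = cos(x), whose Picard iteration converges only linearly, this reaches the fixed point x ≈ 0.7390851 in a handful of iterations.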

Section headings from the full text include: Background; Scaling and Pivoting; Choice of M; Givens Matrices.
