Aitken’s Method In numerical analysis, Aitken's delta-squared process, or Aitken extrapolation, is a series acceleration method used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced the method in 1926. An early form of it was known to Seki Kōwa (late 17th century), who found it for rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that converges linearly.
Aitken’s Algorithm
Aitken's Delta Squared Process Also called Aitken extrapolation. An algorithm which extrapolates the partial sums of a series whose convergence is approximately geometric and accelerates its rate of convergence. A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method. This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error.
FORMULA:

Given a sequence $(x_n)$ converging to a limit $L$, Aitken's extrapolated sequence is

$$\hat{x}_n = x_n - \frac{(x_{n+1} - x_n)^2}{x_{n+2} - 2x_{n+1} + x_n} = x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n}.$$

Derivation: assume the errors shrink with an approximately constant ratio, so

$$\frac{x_{n+1} - L}{x_n - L} \approx \frac{x_{n+2} - L}{x_{n+1} - L}.$$

Cross-multiplying and solving for $L$ gives

$$L \approx \frac{x_n x_{n+2} - x_{n+1}^2}{x_{n+2} - 2x_{n+1} + x_n}.$$

Adding and subtracting $x_n^2$ and $2 x_n x_{n+1}$ in the numerator, then grouping terms,

$$x_n x_{n+2} - x_{n+1}^2 = x_n\,(x_{n+2} - 2x_{n+1} + x_n) - (x_{n+1} - x_n)^2.$$

Finally,

$$L \approx x_n - \frac{(x_{n+1} - x_n)^2}{x_{n+2} - 2x_{n+1} + x_n}.$$
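The formula above can be sketched as a short function (a minimal illustration, not from the original notes; the name `aitken` and the zero-denominator guard are our own choices):

```python
def aitken(x0, x1, x2):
    """One step of Aitken's delta-squared extrapolation.

    Given three consecutive terms of a linearly convergent sequence,
    returns the extrapolated estimate x0 - (Δx)^2 / (Δ²x).
    """
    denom = x2 - 2.0 * x1 + x0  # second difference Δ²x
    if denom == 0.0:
        # sequence is (numerically) already converged; nothing to extrapolate
        return x2
    return x0 - (x1 - x0) ** 2 / denom
```

For a sequence whose error is exactly geometric, e.g. x_n = 2 + 0.5^n (terms 3, 2.5, 2.25), a single Aitken step recovers the limit 2 exactly, which illustrates why the transformation "eliminates the largest part of the absolute error."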
ACCELERATING CONVERGENCE: Aitken’s ∆² process is used to accelerate linearly convergent sequences, regardless of the method that generated them; this acceleration technique is not limited to root-finding algorithms.
Aitken’s process is an extrapolation built from a difference table:

    r          Δr                   Δ²r
    r_{k-2}
    r_{k-1}    r_{k-1} - r_{k-2}
    r_k        r_k - r_{k-1}        (r_k - r_{k-1}) - (r_{k-1} - r_{k-2})
Steffensen’s Method : a modified Aitken’s delta-squared process applied to fixed point iteration. In numerical analysis, Steffensen's method is a root-finding technique similar to Newton's method, named after Johan Frederik Steffensen. Steffensen's method also achieves quadratic convergence, but without using derivatives as Newton's method does.
Derivation using Aitken's delta-squared process: Steffensen's method can be implemented (e.g. in MATLAB) using Aitken's delta-squared process for accelerating convergence of a sequence. The method assumes we start with a linearly convergent sequence and increases its rate of convergence. If the signs of p_n − p, p_{n+1} − p, and p_{n+2} − p agree and p_n is sufficiently close to the desired limit p of the sequence, we can assume the following:

$$\frac{p_{n+1} - p}{p_n - p} \approx \frac{p_{n+2} - p}{p_{n+1} - p}.$$
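Steffensen's iteration can be sketched as follows (a minimal version under the assumptions above; the function name, tolerance, and iteration cap are our own choices, not from the notes): each step computes two fixed-point evaluations and then applies one Aitken Δ² extrapolation.

```python
import math

def steffensen(g, x0, tol=1e-12, max_iter=50):
    """Steffensen's method: fixed-point iteration for x = g(x),
    accelerated by Aitken's delta-squared process.

    Converges quadratically near the fixed point without derivatives.
    """
    x = x0
    for _ in range(max_iter):
        x1 = g(x)            # first fixed-point step
        x2 = g(x1)           # second fixed-point step
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:
            return x2        # differences vanished: converged
        x_new = x - (x1 - x) ** 2 / denom   # Aitken extrapolation
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, applied to g(x) = cos(x) starting from x = 1, it converges rapidly to the fixed point ≈ 0.7390851332, far faster than plain iteration of cos.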
Example: Find a root of cos(x) − x·exp(x) = 0, starting with x₀ = 0. Let the linear iterative process be x_{i+1} = x_i + (1/2)(cos(x_i) − x_i·exp(x_i)), i = 0, 1, 2, . . .
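The example can be worked numerically as follows (a sketch, not the notes' own computation; the reference root value ≈ 0.5177573637 is the standard tabulated root of cos x = x·eˣ): we run the plain linear iteration and then apply Aitken's Δ² to consecutive triples of iterates.

```python
import math

def g(x):
    # the linear iterative process from the example
    return x + 0.5 * (math.cos(x) - x * math.exp(x))

# plain fixed-point iteration from x0 = 0
x = 0.0
seq = [x]
for _ in range(12):
    x = g(x)
    seq.append(x)

# Aitken's delta-squared applied to each consecutive triple
accel = [s0 - (s1 - s0) ** 2 / (s2 - 2.0 * s1 + s0)
         for s0, s1, s2 in zip(seq, seq[1:], seq[2:])]
```

The accelerated values approach the root noticeably faster than the raw iterates, as expected for a linearly convergent sequence.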
General Comment: Aitken extrapolation can greatly accelerate the convergence of a linearly convergent iteration x_{n+1} = g(x_n). This shows the power of understanding the behaviour of the error in a numerical process. From that understanding, we can often improve the accuracy, through extrapolation or some other procedure. This is a justification for using mathematical analysis to understand numerical methods. We will see this repeated at later points in the course, and it holds for many different types of problems and numerical methods for their solution.