 Rate of convergence

In numerical analysis, the speed at which a convergent sequence approaches its limit is called the rate of convergence. Although strictly speaking, a limit does not give information about any finite first part of the sequence, this concept is of practical importance if we deal with a sequence of successive approximations for an iterative method, as then typically fewer iterations are needed to yield a useful approximation if the rate of convergence is higher. This may even make the difference between needing ten or a million iterations.
Similar concepts are used for discretization methods. The solution of the discretized problem converges to the solution of the continuous problem as the grid size goes to zero, and the speed of convergence is one of the factors of the efficiency of the method. However, the terminology in this case is different from the terminology for iterative methods.
Series acceleration is a collection of techniques for improving the rate of convergence of a series. Such acceleration is commonly accomplished with sequence transformations.
Convergence speed for iterative methods
Basic definition
Suppose that the sequence {x_{k}} converges to the number L.
We say that this sequence converges linearly to L if there exists a number μ ∈ (0, 1) such that

 lim_{k→∞} |x_{k+1} − L| / |x_{k} − L| = μ.

The number μ is called the rate of convergence.
If the sequence converges and
 μ = 0, then the sequence is said to converge superlinearly;
 μ = 1, then the sequence is said to converge sublinearly.
If the sequence converges sublinearly and additionally

 lim_{k→∞} |x_{k+2} − x_{k+1}| / |x_{k+1} − x_{k}| = 1,

then the sequence {x_{k}} is said to converge logarithmically to L.
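The quotients in the basic definition can be checked numerically. Below is a minimal Python sketch (the helper name `convergence_quotients` and the example sequence are illustrative choices, not part of the text above):

```python
def convergence_quotients(seq, limit):
    """Successive error quotients |x_{k+1} - L| / |x_k - L|."""
    errors = [abs(x - limit) for x in seq]
    return [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]

# Example: x_k = 2^{-k} converges linearly to 0; every quotient equals 1/2,
# so the rate of convergence is mu = 1/2.
seq = [2.0 ** -k for k in range(10)]
quotients = convergence_quotients(seq, 0.0)
print(quotients[-1])  # -> 0.5
```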
The next definition is used to distinguish superlinear rates of convergence. We say that the sequence converges with order q (for q > 1) to L if

 lim_{k→∞} |x_{k+1} − L| / |x_{k} − L|^{q} = μ for some μ > 0.
In particular, convergence with order
 2 is called quadratic convergence,
 3 is called cubic convergence,
 etc.
This is sometimes called Q-linear convergence, Q-quadratic convergence, etc., to distinguish it from the definition below. The Q stands for "quotient," because the definition uses the quotient between two successive terms.
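Quadratic (order 2) convergence can be seen concretely in Newton's method, a standard quadratically convergent iteration. The sketch below (using f(x) = x² − 2 as an illustrative example) shows the error being roughly squared at each step:

```python
import math

def newton_sqrt2(x0, steps):
    """Newton iteration for f(x) = x^2 - 2, converging to sqrt(2)."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - (x * x - 2.0) / (2.0 * x))  # x - f(x)/f'(x)
    return xs

xs = newton_sqrt2(1.0, 5)
errors = [abs(x - math.sqrt(2.0)) for x in xs]
# Q-quadratic convergence: |x_{k+1} - L| / |x_k - L|^2 approaches a
# positive constant (here about 1/(2*sqrt(2)), roughly 0.354).
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(3)]
```

After five steps the error is already below 10⁻¹⁰, illustrating how the number of correct digits roughly doubles per iteration.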
Extended definition
The drawback of the above definitions is that they do not capture some sequences which still converge reasonably fast but whose "speed" is variable, such as the sequence {b_{k}} below. Therefore, the definition of the rate of convergence is sometimes extended as follows.
Under the new definition, the sequence {x_{k}} converges with at least order q if there exists a sequence {ε_{k}} such that

 |x_{k} − L| ≤ ε_{k} for all k,

and the sequence {ε_{k}} converges to zero with order q according to the above "simple" definition. To distinguish it from that definition, this is sometimes called R-linear convergence, R-quadratic convergence, etc. (with the R standing for "root").
Examples
Consider the following sequences:

 a_{k} = 2^{−k},
 b_{k} = 4^{−⌊k/2⌋}, i.e. 1, 1, 1/4, 1/4, 1/16, 1/16, …,
 c_{k} = 2^{−2^{k}},
 d_{k} = 1/(k + 1).

The sequence {a_{k}} converges linearly to 0 with rate 1/2. More generally, the sequence Cμ^{k} converges linearly with rate μ if μ ∈ (0, 1). The sequence {b_{k}} also converges linearly to 0 with rate 1/2 under the extended definition, but not under the simple definition, because the quotients of its successive terms alternate between 1 and 1/4. The sequence {c_{k}} converges superlinearly; in fact, it is quadratically convergent. Finally, the sequence {d_{k}} converges sublinearly.
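These four behaviours can be verified numerically. One common concrete choice of closed forms matching the descriptions above (an assumption on my part, since other sequences fit the same descriptions) is sketched here:

```python
a = [2.0 ** -k for k in range(8)]         # Q-linear, rate 1/2 (assumed form)
b = [4.0 ** -(k // 2) for k in range(8)]  # R-linear: 1, 1, 1/4, 1/4, 1/16, ...
c = [2.0 ** -(2 ** k) for k in range(5)]  # Q-quadratic (assumed form)
d = [1.0 / (k + 1) for k in range(8)]     # sublinear

# For a, the successive error quotients are the constant 1/2 (Q-linear);
# for b, they alternate between 1 and 1/4, so the simple definition fails:
qa = [a[k + 1] / a[k] for k in range(7)]
qb = [b[k + 1] / b[k] for k in range(7)]
print(qb[:4])  # -> [1.0, 0.25, 1.0, 0.25]

# For c, |c_{k+1}| / |c_k|^2 is constant, confirming order 2:
qc = [c[k + 1] / c[k] ** 2 for k in range(4)]
```

Note that b_{k} ≤ 2·2^{−k}, so {b_{k}} is bounded by a Q-linearly convergent sequence {ε_{k}} and hence converges R-linearly with rate 1/2.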
Convergence speed for discretization methods
A similar situation exists for discretization methods. Here, the important parameter is not the iteration number k but the number of grid points, here denoted n. In the simplest situation (a uniform one-dimensional grid), the number of grid points is inversely proportional to the grid spacing.
In this case, a sequence x_{n} is said to converge to L with order p if there exists a constant C such that

 |x_{n} − L| ≤ Cn^{−p} for all n.

This is written as |x_{n} − L| = O(n^{−p}) using the big O notation.
This is the relevant definition when discussing methods for numerical quadrature or the solution of ordinary differential equations.
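In practice, the order p is often estimated by halving the grid spacing and comparing errors: if |x_{n} − L| ≈ Cn^{−p}, then p ≈ log(e_{n}/e_{2n}) / log 2. A minimal sketch using the composite trapezoidal rule (a quadrature method of order 2, chosen here purely as an illustration):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

# Integrate sin over [0, pi]; the exact value is 2.
exact = 2.0
e_n = abs(trapezoid(math.sin, 0.0, math.pi, 64) - exact)
e_2n = abs(trapezoid(math.sin, 0.0, math.pi, 128) - exact)

# Doubling n cuts the error by about 2^p, which recovers the order:
p = math.log(e_n / e_2n) / math.log(2.0)
print(round(p, 2))  # -> 2.0
```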
Examples
The sequence {d_{k}} with d_{k} = 1 / (k+1) was introduced above. This sequence converges with order 1 according to the convention for discretization methods.
The sequence {a_{k}} with a_{k} = 2^{−k}, which was also introduced above, converges with order p for every number p. It is said to converge exponentially using the convention for discretization methods. However, it only converges linearly (that is, with order 1) using the convention for iterative methods.
Acceleration of convergence
Many methods exist to increase the rate of convergence of a given sequence, i.e. to transform a given sequence into one converging faster to the same limit. Such techniques are in general known as "series acceleration". The transformed sequence should be much less "expensive" to compute than continuing the original sequence to the same accuracy. One example of series acceleration is Aitken's delta-squared process.
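As an illustration, here is a minimal sketch of Aitken's delta-squared process, applied to the partial sums of the Leibniz series for π/4 (an illustrative choice of a slowly, linearly convergent sequence):

```python
import math

def aitken(seq):
    """Aitken's delta-squared transform:
    y_k = x_k - (x_{k+1} - x_k)^2 / (x_{k+2} - 2 x_{k+1} + x_k)."""
    out = []
    for k in range(len(seq) - 2):
        num = (seq[k + 1] - seq[k]) ** 2
        den = seq[k + 2] - 2.0 * seq[k + 1] + seq[k]
        out.append(seq[k] - num / den)
    return out

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... -> pi/4,
# which converge very slowly (sublinearly in the error constant).
x, s = [], 0.0
for i in range(10):
    s += (-1) ** i / (2 * i + 1)
    x.append(s)

y = aitken(x)
print(abs(x[-1] - math.pi / 4))  # about 2.5e-2: still far from the limit
print(abs(y[-1] - math.pi / 4))  # about 8e-5: dramatically accelerated
```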
References
The simple definition is used in
 Michelle Schatzman (2002), Numerical analysis: a mathematical introduction, Clarendon Press, Oxford. ISBN 0198502796.
The extended definition is used in
 Kendall A. Atkinson (1988), An introduction to numerical analysis (2nd ed.), John Wiley and Sons. ISBN 0471500232.
 Walter Gautschi (1997), Numerical analysis: an introduction, Birkhäuser, Boston. ISBN 0817638954.
 Endre Süli and David Mayers (2003), An introduction to numerical analysis, Cambridge University Press. ISBN 0521007941.
Logarithmic convergence is used in
 Van Tuyl, Andrew H. (1994). "Acceleration of convergence of a family of logarithmically convergent sequences". Mathematics of Computation 63 (207): 229–246. http://www.ams.org/journals/mcom/1994-63-207/S0025-5718-1994-1234428-2/S0025-5718-1994-1234428-2.pdf.
The Big O definition is used in
 Richard L. Burden and J. Douglas Faires (2001), Numerical Analysis (7th ed.), Brooks/Cole. ISBN 0534382169
The terms Q-linear and R-linear, and the Big O definition when using Taylor series, are used in
 Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin, New York: Springer-Verlag. pp. 619–620. ISBN 9780387303031.
One may also study the following paper for Q-linear and R-linear convergence:
 Potra, F. A. (1989). "On Q-order and R-order of convergence". J. Optim. Th. Appl. 63 (3): 415–431. doi:10.1007/BF00939805.