**Fixed point iteration**
In numerical analysis, **fixed point iteration** is a method of computing fixed points of iterated functions. More specifically, given a function $f$ defined on the real numbers with real values and given a point $x_0$ in the domain of $f$, the fixed point iteration is

$x_{n+1} = f(x_n), \quad n = 0, 1, 2, \dots$

which gives rise to the sequence $x_0, x_1, x_2, \dots$, which is hoped to converge to a point $x$. If $f$ is continuous, then one can prove that the obtained $x$ is a fixed point of $f$, i.e.,

$f(x) = x.$
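The iteration above can be sketched in a few lines. The following is a minimal illustration, not part of the original text; the helper name `fixed_point_iteration`, the tolerance, and the use of $\cos$ as the example map are all assumptions for demonstration:

```python
import math

def fixed_point_iteration(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge within max_iter steps")

# Example: iterating cos from x0 = 1 converges to the unique
# solution of cos(x) = x (approximately 0.739085).
x = fixed_point_iteration(math.cos, 1.0)
```

Stopping when successive iterates are close is a common heuristic; it does not by itself guarantee closeness to a true fixed point unless $f$ is (for example) a contraction.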

More generally, the function $f$ can be defined on any metric space with values in that same space.

**Examples**

* A first simple and useful example is the Babylonian method for computing the square root of $a > 0$, which consists in taking $f(x) = \frac{1}{2}\left(\frac{a}{x} + x\right)$, i.e. the mean value of $x$ and $a/x$, to approach the limit $x = \sqrt{a}$ (from whatever starting point $x_0 \gg 0$). This is a special case of Newton's method discussed below.
* The fixed point iteration $x_{n+1} = \cos x_n$ converges to the unique fixed point of the function $f(x) = \cos x$ for any starting point $x_0$. This example satisfies the hypotheses of the Banach fixed point theorem. Hence, the error after $n$ steps satisfies $|x_n - x| \leq \frac{q^n}{1-q} |x_1 - x_0| = C q^n$ (where we can take $q = 0.85$ if we start from $x_0 = 1$). When the error is less than a multiple of $q^n$ for some constant $q < 1$, we say that we have *linear convergence*. The Banach fixed point theorem allows one to obtain fixed point iterations with linear convergence.
* The fixed point iteration $x_{n+1} = 2x_n$ will diverge unless $x_0 = 0$. We say that the fixed point of $f(x) = 2x$ is *repelling*.

* The requirement that $f$ is continuous is important, as the following example shows. The iteration

  $x_{n+1} = \begin{cases} \frac{x_n}{2}, & x_n \ne 0 \\ 1, & x_n = 0 \end{cases}$

  converges to 0 for all values of $x_0$. However, 0 is *not* a fixed point of the function

  $f(x) = \begin{cases} \frac{x}{2}, & x \ne 0 \\ 1, & x = 0 \end{cases}$

  This function is *not* continuous at $x = 0$, and in fact has no fixed points.
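The Babylonian method from the first example above can be sketched as follows; the function name and step count are illustrative choices, not part of the original text:

```python
def babylonian_sqrt(a, x0=1.0, n_steps=20):
    """Fixed point iteration x_{n+1} = (a/x_n + x_n) / 2, converging to sqrt(a)."""
    x = x0
    for _ in range(n_steps):
        x = 0.5 * (a / x + x)
    return x

# babylonian_sqrt(2.0) approaches 1.41421356...
```

Because this iteration is a special case of Newton's method, it converges quadratically: far fewer than 20 steps already reach machine precision for moderate inputs.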

**Applications**

* Newton's method for a given differentiable function $f(x)$ is $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. If we write $g(x) = x - \frac{f(x)}{f'(x)}$, we may rewrite the Newton iteration as the fixed point iteration $x_{n+1} = g(x_n)$. If this iteration converges to a fixed point $x$ of $g$, then $x = g(x) = x - \frac{f(x)}{f'(x)}$, so $\frac{f(x)}{f'(x)} = 0$. Since $f'(x)$ is finite, the quotient vanishes only when its numerator does, therefore $f(x) = 0$: $x$ is a *root* of $f$. Assuming the hypotheses of the Banach fixed point theorem are satisfied, the Newton iteration converges at least linearly. However, a more detailed analysis shows that under certain circumstances, $|x_n - x| \leq C q^{2^n}$. This is called *quadratic convergence*.

* Halley's method is similar to Newton's method but, when it works correctly, its error satisfies $|x_n - x| \leq C q^{3^n}$ (*cubic convergence*). In general, it is possible to design methods that converge with speed $C q^{k^n}$ for any $k \in \mathbb{N}$. As a general rule, the higher the $k$, the less stable the method is, and the more computationally expensive it gets. For these reasons, higher order methods are typically not used.

* Runge–Kutta methods and numerical ordinary differential equation solvers in general can be viewed as fixed point iterations. Indeed, the core idea when analyzing the A-stability of ODE solvers is to start with the special case $y' = ay$, where $a$ is a complex number, and to check whether the ODE solver converges to the fixed point $y = 0$ whenever the real part of $a$ is negative. [*One may also consider certain iterations A-stable if the iterates stay bounded for a long time, which is beyond the scope of this article.*]
* The Picard–Lindelöf theorem, which shows that ordinary differential equations have solutions, is essentially an application of the Banach fixed point theorem to a special sequence of functions which forms a fixed point iteration.

**Properties**

If a function $f$ defined on the real line with real values is Lipschitz continuous with Lipschitz constant $L < 1$, then this function has precisely one fixed point, and the fixed point iteration converges towards that fixed point for any initial guess $x_0$. This theorem can be generalized to any complete metric space.
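The reformulation of Newton's method as a fixed point iteration $x_{n+1} = g(x_n)$, described above, can be sketched as follows; the helper name, the test function $f(x) = x^2 - 2$, and the step count are illustrative assumptions:

```python
def newton_as_fixed_point(f, f_prime, x0, n_steps=50):
    """Run Newton's method as the fixed point iteration of g(x) = x - f(x)/f'(x)."""
    g = lambda x: x - f(x) / f_prime(x)
    x = x0
    for _ in range(n_steps):
        x = g(x)
    return x

# Root of f(x) = x^2 - 2: the iterates converge quadratically to sqrt(2),
# and the fixed point of g is exactly a root of f.
root = newton_as_fixed_point(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```

Note that once the iterate reaches a fixed point of $g$ to machine precision, further applications of $g$ leave it unchanged, which is why running extra steps is harmless here.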

The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Aitken's delta-squared process. The application of Aitken's method to fixed point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic.

**See also**

* Banach fixed point theorem
