Preconditioner

In linear algebra and numerical analysis, a preconditioner "P" of a matrix "A" is a matrix such that P^{-1}A has a smaller condition number than "A". Preconditioners are useful when using an iterative method to solve a large, sparse linear system

: Ax = b,

for "x" since the rate of convergence for most iterative linear solvers degrades as the condition number of a matrix increases. Instead of solving the original linear system above, one may solve either the left preconditioned system

: P^{-1}Ax = P^{-1}b,

via the two solves

: c = P^{-1}b, \qquad (P^{-1}A)x = c,

or the right preconditioned system

: AP^{-1}Px = b,

via the two solves

: (AP^{-1})y = b, \qquad x = P^{-1}y,

which are both equivalent to solving the original system so long as the preconditioner matrix "P" is nonsingular.
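This equivalence can be sketched numerically. The following is a minimal illustration, not from the article: the matrix, right-hand side, and diagonal choice of "P" are hypothetical, and P^{-1} is formed explicitly only for demonstration (in practice it never would be):

```python
import numpy as np

# Hypothetical small system Ax = b and a simple nonsingular preconditioner P.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
P = np.diag(np.diag(A))          # diagonal of A, clearly nonsingular here
Pinv = np.linalg.inv(P)          # formed explicitly only for this sketch

# Left preconditioning: solve P^{-1} A x = P^{-1} b directly for x.
x_left = np.linalg.solve(Pinv @ A, Pinv @ b)

# Right preconditioning: solve (A P^{-1}) y = b, then recover x = P^{-1} y.
y = np.linalg.solve(A @ Pinv, b)
x_right = Pinv @ y

# Both agree with the solution of the original, unpreconditioned system.
x_direct = np.linalg.solve(A, b)
assert np.allclose(x_left, x_direct) and np.allclose(x_right, x_direct)
```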

The goal of this preconditioned system is to reduce the condition number of the left or right preconditioned system matrix

: P^{-1}A,

or

: AP^{-1},

respectively.

Typically there is a trade-off in the choice of "P". Since the operator P^{-1} must be applied at each step of the iterative linear solver, it should be very cheap to apply in order to keep down the cost (computing time) of each iteration. The cheapest preconditioner to apply would therefore be

: P = I, \quad \text{since then} \quad P^{-1} = I;

unfortunately this yields the original linear system, and the preconditioner does nothing. At the other extreme, the choice

: P = A, \quad \text{which gives} \quad P^{-1}A = AP^{-1} = I,

has the optimal condition number of 1 and requires a single iteration for convergence; however, in this case

: P^{-1} = A^{-1},

and applying the preconditioner is as difficult as solving the original system. One therefore chooses "P" somewhere between these two extremes, in an attempt to achieve a minimal number of linear iterations while keeping the operator P^{-1} as cheap to apply as possible. Some examples of typical preconditioning approaches are detailed below.
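The two extremes can be checked directly; this is a hedged numpy sketch (the random symmetric positive definite matrix is an arbitrary test case, and the inverses are formed explicitly only to measure condition numbers):

```python
import numpy as np

# Arbitrary symmetric positive definite test matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A @ A.T + 5 * np.eye(5)

kappa_A = np.linalg.cond(A)

# P = I: the preconditioned matrix is A itself -- nothing is gained.
kappa_identity = np.linalg.cond(np.linalg.inv(np.eye(5)) @ A)

# P = A: the preconditioned matrix is I with condition number 1, but
# applying P^{-1} means inverting A -- as hard as the original problem.
kappa_perfect = np.linalg.cond(np.linalg.inv(A) @ A)

assert np.isclose(kappa_identity, kappa_A)
assert np.isclose(kappa_perfect, 1.0)
```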

We note that in the above discussion, the preconditioned matrix

: P^{-1}A, \quad \text{resp.} \quad AP^{-1},

is almost never explicitly formed. When using an iterative method, only the action of applying the preconditioner solve P^{-1} to a given vector need be computed.
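A matrix-free sketch of this point, assuming a simple preconditioned Richardson iteration (the function name and the diagonal preconditioner are illustrative, not from any particular library): the solver only ever calls a routine applying v → P^{-1}v, and P^{-1}A never exists as a matrix.

```python
import numpy as np

def preconditioned_richardson(A, b, apply_Pinv, tol=1e-10, max_iter=500):
    """Richardson iteration x <- x + P^{-1}(b - Ax); P^{-1}A is never formed."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x                # residual of the original system
        if np.linalg.norm(r) < tol:
            break
        x = x + apply_Pinv(r)        # only the action of P^{-1} is needed
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = preconditioned_richardson(A, b, lambda v: v / d)   # Jacobi action
assert np.allclose(A @ x, b, atol=1e-8)
```

With the Jacobi action, this iteration is exactly the classical Jacobi method, which converges here because the test matrix is strictly diagonally dominant.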

It may also be noted that for symmetric matrices "A", the desired effect of applying a preconditioner is to make the quadratic form of the preconditioned operator P^{-1}A nearly spherical. [http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf]

Examples

Two popular preconditioning strategies are based on splitting "A" into the three components

: A = L + D + U,

where "L" is the strictly lower triangular part of "A", "D" is the diagonal of "A", and "U" is the strictly upper triangular part of "A".
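The splitting can be sketched with standard numpy helpers (the test matrix is arbitrary):

```python
import numpy as np

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 5.0, 1.0],
              [2.0, 1.0, 6.0]])
L = np.tril(A, k=-1)      # strictly lower triangular part
D = np.diag(np.diag(A))   # diagonal part
U = np.triu(A, k=1)       # strictly upper triangular part
assert np.allclose(L + D + U, A)
```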

Jacobi preconditioner

The Jacobi preconditioner is one of the simplest forms of preconditioning, in which the preconditioner is chosen to be the diagonal of the matrix only,

: P = D.

That is,

: P_{ij} = A_{ii}\delta_{ij} = \begin{cases} A_{ii} & i = j \\ 0 & \text{otherwise} \end{cases}

and so

: P^{-1}_{ij} = \frac{\delta_{ij}}{A_{ii}}.
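Since P^{-1} is diagonal, applying it is just elementwise division by the diagonal of "A"; no matrix need be formed. A minimal sketch on a hypothetical badly scaled matrix (the inverse is formed below only to measure the condition number):

```python
import numpy as np

# Badly scaled symmetric positive definite test matrix.
A = np.array([[100.0, 2.0],
              [2.0,   1.0]])
d = np.diag(A)

# Action of the Jacobi preconditioner solve: elementwise division.
apply_Pinv = lambda v: v / d

kappa_before = np.linalg.cond(A)
kappa_after = np.linalg.cond(np.diag(1.0 / d) @ A)
assert kappa_after < kappa_before    # diagonal scaling improves conditioning
```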

SOR and SSOR

The Successive Over Relaxation (SOR) preconditioner takes "P" to be

: P = \left( \frac{D}{\omega} + L \right) \frac{\omega}{2 - \omega} D^{-1} \left( \frac{D}{\omega} + U \right)

where ω is a "relaxation parameter" between 0 and 2, exclusive.

The version for symmetric matrices "A", in which

: U = L^T,

is referred to as Symmetric Successive Over Relaxation, or SSOR, in which

: P = \left( \frac{D}{\omega} + L \right) \frac{\omega}{2 - \omega} D^{-1} \left( \frac{D}{\omega} + L^T \right).
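The SSOR preconditioner can be assembled directly from this formula; the following sketch uses ω = 1 (symmetric Gauss–Seidel) on a hypothetical tridiagonal matrix, and forms P and its inverse explicitly only to check the conditioning:

```python
import numpy as np

# Hypothetical symmetric positive definite tridiagonal test matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
L = np.tril(A, k=-1)
D = np.diag(np.diag(A))
omega = 1.0                           # relaxation parameter in (0, 2)

# P = (D/w + L) * (w/(2-w)) * D^{-1} * (D/w + L^T), per the SSOR formula.
P = (D / omega + L) @ (omega / (2 - omega) * np.linalg.inv(D)) @ (D / omega + L.T)

kappa_before = np.linalg.cond(A)
kappa_after = np.linalg.cond(np.linalg.inv(P) @ A)
assert kappa_after < kappa_before
```

In practice one never inverts P; each application of P^{-1} is carried out as a forward triangular solve, a diagonal scaling, and a backward triangular solve.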

The SOR and SSOR methods are credited to David M. Young, Jr.

See also

* Other preconditioners:
** Incomplete Cholesky factorization
** Incomplete LU factorization
* Krylov subspace
* Iterative methods
* Conjugate gradient method
* Preconditioned conjugate gradient method

External links

* [http://www.math-linux.com/spip.php?article55 Preconditioned Conjugate Gradient] – math-linux.com
* [http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf An Introduction to the Conjugate Gradient Method Without the Agonizing Pain] by Jonathan Richard Shewchuk.
* [http://www.preconditioners.com www.preconditioners.com]


Wikimedia Foundation. 2010.
