Conjugate residual method

The conjugate residual method is an iterative numerical method used for solving systems of linear equations. It is a Krylov subspace method closely related to the much more popular conjugate gradient method, with similar construction and convergence properties.

This method is used to solve linear equations of the form

\mathbf A \mathbf x = \mathbf b

where \mathbf A is an invertible Hermitian matrix and \mathbf b is nonzero.

The conjugate residual method differs from the closely related conjugate gradient method primarily in that it involves somewhat more computation per iteration but applies to problems that are not positive definite; in fact, the only requirement (besides an invertible \mathbf A and a nonzero \mathbf b) is that \mathbf A be Hermitian (or, over the reals, symmetric). This makes the conjugate residual method applicable to problems that intuitively require finding saddle points rather than minima, such as numerical optimization with Lagrange multiplier constraints.

Given an (arbitrary) initial estimate of the solution \mathbf x_0, the method is outlined below:

\mathbf x_0 = \text{Some initial guess}
\mathbf r_0 = \mathbf b - \mathbf{A x}_0
\mathbf p_0 = \mathbf r_0
Iterate, with k starting at 0:
\alpha_k = \frac{\mathbf r_k^\mathrm{T} \mathbf{A r}_k}{(\mathbf{A p}_k)^\mathrm{T} \mathbf{A p}_k}
\mathbf x_{k+1} = \mathbf x_k + \alpha_k \mathbf p_k
\mathbf r_{k+1} = \mathbf r_k - \alpha_k \mathbf{A p}_k
\beta_k = \frac{\mathbf r_{k+1}^\mathrm{T} \mathbf{A r}_{k+1}}{\mathbf r_k^\mathrm{T} \mathbf{A r}_k}
\mathbf p_{k+1} = \mathbf r_{k+1} + \beta_k \mathbf p_k
\mathbf{A p}_{k+1} = \mathbf{A r}_{k+1} + \beta_k \mathbf{A p}_k
k := k + 1

The iteration may be stopped once \mathbf x_k has been deemed converged. Note that the only difference between this method and the conjugate gradient method is the calculation of \alpha_k and \beta_k (plus the optional recursive calculation of \mathbf{A p}_{k+1} at the end).
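To make the recurrences above concrete, here is a minimal NumPy sketch of the unpreconditioned method for real symmetric systems. The function name and the tol/max_iter stopping parameters are illustrative choices, not anything prescribed by the method; for complex Hermitian \mathbf A the transposes would become conjugate transposes.

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Conjugate residual method for A x = b with real symmetric A."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x              # r_0 = b - A x_0
    p = r.copy()               # p_0 = r_0
    Ar = A @ r
    Ap = Ar.copy()             # A p_0 = A r_0 because p_0 = r_0
    rAr = r @ Ar               # r_k^T A r_k, reused between iterations
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)            # alpha_k = r^T A r / (A p)^T (A p)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:        # stop once the residual is small
            break
        Ar = A @ r                         # the single mat-vec per iteration
        rAr_next = r @ Ar
        beta = rAr_next / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap                # recursive update of A p_{k+1}
        rAr = rAr_next
    return x

# The method only needs symmetry, so an indefinite matrix is fine:
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])               # symmetric, eigenvalues of mixed sign
b = np.array([1.0, 2.0])
print(conjugate_residual(A, b))           # agrees with np.linalg.solve(A, b)
```

Carrying \mathbf{A r}_k and \mathbf{A p}_k along and updating the latter recursively is what keeps the sketch at one matrix-vector product per iteration.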

Preconditioning

By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as done for the conjugate gradient method:

\mathbf x_0 = \text{Some initial guess}
\mathbf r_0 = \mathbf M^{-1}(\mathbf b - \mathbf{A x}_0)
\mathbf p_0 = \mathbf r_0
Iterate, with k starting at 0:
\alpha_k = \frac{\mathbf r_k^\mathrm{T} \mathbf A \mathbf r_k}{(\mathbf{A p}_k)^\mathrm{T} \mathbf M^{-1} \mathbf{A p}_k}
\mathbf x_{k+1} = \mathbf x_k + \alpha_k \mathbf p_k
\mathbf r_{k+1} = \mathbf r_k - \alpha_k \mathbf M^{-1} \mathbf{A p}_k
\beta_k = \frac{\mathbf r_{k+1}^\mathrm{T} \mathbf A \mathbf r_{k+1}}{\mathbf r_k^\mathrm{T} \mathbf A \mathbf r_k}
\mathbf p_{k+1} = \mathbf r_{k+1} + \beta_k \mathbf p_k
\mathbf{A p}_{k+1} = \mathbf A \mathbf r_{k+1} + \beta_k \mathbf{A p}_k
k := k + 1

The preconditioner \mathbf M^{-1} must be symmetric. Note that the residual vector here is different from the residual vector without preconditioning.
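A corresponding sketch of the preconditioned recurrences, again assuming real symmetric \mathbf A. Here M_solve is a hypothetical caller-supplied helper that applies \mathbf M^{-1} to a vector; because the recurrences track the preconditioned residual, the stopping test below deliberately recomputes the true residual \mathbf b - \mathbf{A x}.

```python
import numpy as np

def preconditioned_cr(A, b, M_solve, x0=None, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate residual method for real symmetric A.

    M_solve(v) must apply M^{-1} to the vector v.
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = M_solve(b - A @ x)     # r_0 = M^{-1}(b - A x_0): the preconditioned residual
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        MAp = M_solve(Ap)
        alpha = rAr / (Ap @ MAp)             # alpha_k = r^T A r / (A p)^T M^{-1} (A p)
        x = x + alpha * p
        r = r - alpha * MAp                  # r_{k+1} = r_k - alpha_k M^{-1} A p_k
        if np.linalg.norm(b - A @ x) < tol:  # test the true residual, not r
            break
        Ar = A @ r
        rAr_next = r @ Ar
        beta = rAr_next / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap
        rAr = rAr_next
    return x

# Example with a Jacobi (diagonal) preconditioner, M = diag(A):
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
print(preconditioned_cr(A, b, lambda v: v / d))
```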

References

  • Yousef Saad, Iterative Methods for Sparse Linear Systems (2nd ed.), SIAM, pp. 181–182. ISBN 978-0898715347.
  • Jonathan Richard Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain (edition 1¼), pp. 39–40.
