Conjugate residual method
The conjugate residual method is an iterative numerical method used for solving systems of linear equations. It is a Krylov subspace method very similar to the much more popular conjugate gradient method, with similar construction and convergence properties.
This method is used to solve linear equations of the form

    A x = b

where A is an invertible and Hermitian matrix, and b is nonzero.
The conjugate residual method differs from the closely related conjugate gradient method primarily in that it involves somewhat more computation per iteration but is applicable to problems that are not positive definite; in fact the only requirement (besides the obvious invertible A and nonzero b) is that A be Hermitian (or, with real numbers, symmetric). This makes the conjugate residual method applicable to problems which intuitively require finding saddle points instead of minima, such as numerical optimization with Lagrange multiplier constraints.
Given an (arbitrary) initial estimate of the solution x_0, the method is outlined below:

    x_0 := some initial guess
    r_0 := b - A x_0
    p_0 := r_0

- Iterate, with k starting at 0:

    α_k := (r_k^T A r_k) / ((A p_k)^T (A p_k))
    x_{k+1} := x_k + α_k p_k
    r_{k+1} := r_k - α_k A p_k
    β_k := (r_{k+1}^T A r_{k+1}) / (r_k^T A r_k)
    p_{k+1} := r_{k+1} + β_k p_k
    A p_{k+1} := A r_{k+1} + β_k A p_k

the iteration may be stopped once x_{k+1} has been deemed converged. Note that the only difference between this and the conjugate gradient method is the calculation of α_k and β_k (plus the optional recursive calculation of A p_{k+1} at the end, which saves one matrix–vector product per iteration).
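The iteration above can be sketched in NumPy as follows. This is a minimal illustrative implementation, not a reference one; the function name, the stopping tolerance, and the small indefinite example system are assumptions chosen to show that positive definiteness is not required.

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Conjugate residual method for A x = b, A real symmetric (Hermitian)."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x              # initial residual r_0
    p = r.copy()               # initial search direction p_0
    Ar = A @ r
    Ap = Ar.copy()             # A p_0 = A r_0
    rAr = r @ Ar               # r_k^T A r_k
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap    # optional recursive update of A p_{k+1}
    return x

# Hypothetical example: a symmetric but indefinite (invertible) system,
# for which plain conjugate gradient is not guaranteed to work.
A = np.array([[2.0, 1.0], [1.0, -3.0]])
b = np.array([1.0, 2.0])
x = conjugate_residual(A, b)
```

For this 2×2 system the method terminates (in exact arithmetic) after two iterations, consistent with the finite-termination property of Krylov methods.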
Preconditioning
By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as done for the conjugate gradient method:

    x_0 := some initial guess
    r_0 := M^{-1}(b - A x_0)
    p_0 := r_0

- Iterate, with k starting at 0:

    α_k := (r_k^T A r_k) / ((A p_k)^T M^{-1} (A p_k))
    x_{k+1} := x_k + α_k p_k
    r_{k+1} := r_k - α_k M^{-1} A p_k
    β_k := (r_{k+1}^T A r_{k+1}) / (r_k^T A r_k)
    p_{k+1} := r_{k+1} + β_k p_k
    A p_{k+1} := A r_{k+1} + β_k A p_k

The preconditioner M must be symmetric. Note that the residual vector here is different from the residual vector without preconditioning: it is the preconditioned residual M^{-1}(b - A x_k).
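The preconditioned variant can be sketched the same way. This is an illustrative implementation under assumed names; the preconditioner is passed as a function applying M^{-1}, and the Jacobi (diagonal) preconditioner used in the example is one simple symmetric choice, not prescribed by the method.

```python
import numpy as np

def preconditioned_cr(A, b, M_inv, x0=None, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate residual; M_inv(v) applies the inverse
    of a symmetric preconditioner M to a vector v."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = M_inv(b - A @ x)       # preconditioned residual r_0
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        MAp = M_inv(Ap)
        alpha = rAr / (Ap @ MAp)
        x += alpha * p
        r -= alpha * MAp       # update of the preconditioned residual
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap
    return x

# Assumed example: Jacobi (diagonal) preconditioner on a small symmetric system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = preconditioned_cr(A, b, lambda v: v / d)
```

Note the stopping test here monitors the preconditioned residual, matching the remark above that it differs from the unpreconditioned one.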
References
- Yousef Saad, Iterative Methods for Sparse Linear Systems (2nd ed.), SIAM, pages 181–182. ISBN 978-0898715347.
- Jonathan Richard Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, pages 39–40.
Wikimedia Foundation. 2010.