Successive over-relaxation
Successive over-relaxation (SOR) is a numerical method used to speed up convergence of the Gauss–Seidel method for solving a linear system of equations. A similar method can be used for any slowly converging iterative process.

It was devised simultaneously by David M. Young and by H. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel, for instance the method of Lewis Fry Richardson and the methods developed by R. V. Southwell. However, these methods were designed for computation by human calculators, and they required some expertise to ensure convergence to the solution, which made them inapplicable for programming on digital computers. These aspects are discussed in the thesis of David M. Young.

Formulation
We seek the solution to a set of linear equations:

:A\phi = b.

Write the matrix A as A = D + L + U, where D, L and U denote the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.
The successive over-relaxation (SOR) iteration is defined by the recurrence relation:

:(D + \omega L)\,\phi^{(k+1)} = \bigl(-\omega U + (1-\omega)D\bigr)\,\phi^{(k)} + \omega b. \qquad (*)

Here, \phi^{(k)} denotes the k-th iterate and \omega is a relaxation factor. This iteration reduces to the Gauss–Seidel iteration for \omega = 1. As with the Gauss–Seidel method, the computation may be done in place, and the iteration is continued until the changes made by an iteration are below some tolerance.
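As a quick check of the Gauss–Seidel limit (a one-line derivation, immediate from (*) though not spelled out in the original): setting \omega = 1 makes both relaxation terms drop out,

:(D + L)\,\phi^{(k+1)} = -U\phi^{(k)} + b,

which is precisely the Gauss–Seidel recurrence.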
The choice of relaxation factor is not necessarily easy, and depends upon the properties of the coefficient matrix. For symmetric, positive-definite matrices it can be proven that any 0 < \omega < 2 will lead to convergence, but we are generally interested in faster convergence rather than just convergence. As in the Gauss–Seidel method, in order to implement the iteration (*) it is only necessary to solve a triangular system of linear equations, which is much simpler than solving the original arbitrary system. The iteration formula is:
:\phi^{(k+1)}_i = (1-\omega)\phi^{(k)}_i + \frac{\omega}{a_{ii}} \left(b_i - \sum_{j=1}^{i-1} a_{ij}\phi^{(k+1)}_j - \sum_{j=i+1}^{n} a_{ij}\phi^{(k)}_j\right),\quad i=1,2,\ldots,n.
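The per-component formula above translates directly into code. The following is a minimal sketch in Python/NumPy; the function name, test matrix, and stopping rule are illustrative choices, not part of the original text:

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=10_000):
    """Sketch of SOR: sweep the components in place until the update is small."""
    n = len(b)
    phi = np.zeros(n)  # initial guess
    for _ in range(max_iter):
        old = phi.copy()
        for i in range(n):
            # sigma uses already-updated phi[j] for j < i (Gauss-Seidel style)
            sigma = A[i, :i] @ phi[:i] + A[i, i + 1:] @ phi[i + 1:]
            phi[i] = (1 - omega) * phi[i] + omega / A[i, i] * (b[i] - sigma)
        if np.max(np.abs(phi - old)) < tol:
            break
    return phi

# Example: a small symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor(A, b, omega=1.25)
```

Because the sweep updates phi in place, each phi[j] with j < i already holds its new value when row i is processed, exactly as in the formula.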
Algorithm
Inputs: A, b, ω
Output: φ

Choose an initial guess φ to the solution
repeat until convergence
    for i from 1 until n do
        σ ← 0
        for j from 1 until i − 1 do
            σ ← σ + a_ij φ_j
        end (j-loop)
        for j from i + 1 until n do
            σ ← σ + a_ij φ_j
        end (j-loop)
        φ_i ← (1 − ω)φ_i + (ω / a_ii)(b_i − σ)
    end (i-loop)
    check if convergence is reached
end (repeat)

Other applications of the method
A similar technique can be used for any iterative method. Values of ω > 1 are used to speed up convergence of a slowly converging process, while values of ω < 1 are often used to help establish convergence of a diverging iterative process.

There are various methods that adaptively set the relaxation parameter ω based on the observed behavior of the converging process. Usually they help to reach super-linear convergence for some problems, but fail for others.
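As a concrete illustration of the effect of ω, the sketch below counts SOR sweeps needed for convergence at different relaxation factors. The 1-D Poisson test matrix, tolerance, and function name are arbitrary choices for this demonstration, not from the original text:

```python
import numpy as np

def sor_sweeps(A, b, omega, tol=1e-8, max_iter=10_000):
    """Count SOR sweeps until successive iterates differ by less than tol."""
    n = len(b)
    phi = np.zeros(n)
    for k in range(1, max_iter + 1):
        old = phi.copy()
        for i in range(n):
            sigma = A[i, :i] @ phi[:i] + A[i, i + 1:] @ phi[i + 1:]
            phi[i] = (1 - omega) * phi[i] + omega / A[i, i] * (b[i] - sigma)
        if np.max(np.abs(phi - old)) < tol:
            return k
    return max_iter

# Illustrative SPD system: 1-D Poisson matrix (tridiagonal 2, -1) of size 20.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
counts = {omega: sor_sweeps(A, b, omega) for omega in (1.0, 1.5, 1.8)}
```

For this matrix, over-relaxation (ω > 1) should need noticeably fewer sweeps than plain Gauss–Seidel (ω = 1), consistent with the convergence discussion above.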
References
* CFD-Wiki: Successive over-relaxation method (SOR)
*Yousef Saad, " [http://www-users.cs.umn.edu/%7Esaad/books.html Iterative Methods for Sparse Linear Systems] ", 1st edition, PWS, 1996.
* [http://www.netlib.org/utk/papers/templates/node11.html Netlib] 's copy of Jack Dongarra's section of "Templates for the Solution of Linear Systems"
* [http://www.cs.utexas.edu/users/young/ Web page of David M. Young]
* [http://www.cs.utexas.edu/users/young/david_young_thesis.pdf PhD Thesis of David M. Young] (Harvard, 1950)

External links
* [http://math.fullerton.edu/mathews/n2003/SORmethodMod.html Module for the SOR Method]
* [http://www.redbrick.dcu.ie/~hyperion/files/Project.pdf The Successive Over–Relaxation (S.O.R.) Algorithm & its Application to Numerical Solutions of Elliptic Partial Differential Equations]