Successive over-relaxation

Successive over-relaxation (SOR) is a numerical method used to speed up convergence of the Gauss–Seidel method for solving a linear system of equations. A similar method can be used for any slowly converging iterative process. It was devised simultaneously by David M. Young and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel, for instance the method of Lewis Fry Richardson and the methods developed by R. V. Southwell. However, these methods were designed for computation by human calculators, and they required some expertise to ensure convergence to the solution, which made them inapplicable for programming on digital computers. These aspects are discussed in the thesis of David M. Young.

Formulation

We seek the solution to a set of linear equations

: $A\phi = b.$

Write the matrix $A$ as $A = D + L + U$, where $D$, $L$ and $U$ denote the diagonal, strictly lower triangular, and strictly upper triangular parts of $A$, respectively.

The successive over-relaxation (SOR) iteration is defined by the recurrence relation

: $(D + \omega L)\,\phi^{(k+1)} = \bigl((1-\omega)D - \omega U\bigr)\,\phi^{(k)} + \omega b. \qquad (*)$

Here, $\phi^{(k)}$ denotes the $k$th iterate and $\omega$ is a relaxation factor. This iteration reduces to the Gauss–Seidel iteration for $\omega = 1$. As with the Gauss–Seidel method, the computation may be done in place, and the iteration is continued until the changes made by an iteration are below some tolerance.
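
To see where (∗) comes from, one can multiply $A\phi = b$ by $\omega$, insert the splitting $A = D + L + U$, and move the strictly upper triangular and scaled diagonal terms to the right-hand side:

: $\omega(D + L + U)\,\phi = \omega b \;\Longrightarrow\; (D + \omega L)\,\phi = \bigl((1-\omega)D - \omega U\bigr)\,\phi + \omega b.$

Evaluating the left-hand side at the new iterate $\phi^{(k+1)}$ and the right-hand side at the previous iterate $\phi^{(k)}$ yields the recurrence (∗).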

The choice of relaxation factor is not necessarily easy, and depends upon the properties of the coefficient matrix. For symmetric, positive-definite matrices it can be proven that $0 < \omega < 2$ will lead to convergence, but we are generally interested in faster convergence rather than just convergence.
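
As an illustrative check (not part of the original formulation), the spectral radius of the SOR iteration matrix $(D + \omega L)^{-1}\bigl((1-\omega)D - \omega U\bigr)$ can be computed numerically for a small symmetric positive-definite matrix; values below 1 indicate convergence. The example matrix and the NumPy-based sketch below are assumptions made for illustration only.

    import numpy as np

    # Sketch: spectral radius of the SOR iteration matrix
    # T(omega) = (D + omega*L)^{-1} ((1 - omega)*D - omega*U)
    # for an example symmetric positive-definite matrix A (chosen for illustration).
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)

    for omega in (0.5, 1.0, 1.5, 1.9):
        T = np.linalg.solve(D + omega * L, (1.0 - omega) * D - omega * U)
        rho = max(abs(np.linalg.eigvals(T)))
        print(f"omega = {omega:.1f}, spectral radius = {rho:.4f}")

For this matrix the spectral radius stays below 1 for every tested $\omega$ in $(0, 2)$, consistent with the convergence statement above.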

As in the Gauss–Seidel method, in order to implement the iteration (∗) it is only necessary to solve a triangular system of linear equations, which is much simpler than solving the original arbitrary system. The iteration formula is:

: $\phi^{(k+1)}_i = (1-\omega)\,\phi^{(k)}_i + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}\,\phi^{(k+1)}_j - \sum_{j=i+1}^{n} a_{ij}\,\phi^{(k)}_j\right), \quad i = 1, 2, \ldots, n.$

Algorithm

Inputs: "A" , "b", ω
Output: φ

Choose an initial guess phi to the solution
repeat until convergence:for "i" from 1 until "n" do::σ ← 0::for "j" from 1 until "i" − 1 do:::σ ← σ + a"ij" φ"j"::end ("j"-loop)::for "j" from "i" + 1 until "n" do
:::σ ← σ + a"ij" φ"j"::end ("j"-loop)
:: phi_i leftarrow (1-omega)phi_i + frac{omega}{a_{ii (b_i - sigma) :end ("i"-loop):check if convergence is reachedend (repeat)
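
The pseudocode above translates directly into a short program. The following Python/NumPy sketch is one possible implementation; the function name, tolerance, iteration cap and example system are illustrative assumptions, not part of the original article.

    import numpy as np

    def sor(A, b, omega, phi=None, tol=1e-8, max_iter=10000):
        # Successive over-relaxation for A phi = b (illustrative sketch).
        n = len(b)
        phi = np.zeros(n) if phi is None else np.asarray(phi, dtype=float).copy()
        for _ in range(max_iter):
            phi_old = phi.copy()
            for i in range(n):
                # sigma gathers the off-diagonal terms, reusing the components
                # phi[0..i-1] already updated in this sweep (as in Gauss-Seidel).
                sigma = A[i, :i] @ phi[:i] + A[i, i+1:] @ phi[i+1:]
                phi[i] = (1.0 - omega) * phi[i] + (omega / A[i, i]) * (b[i] - sigma)
            # stop once the change made by a full sweep falls below the tolerance
            if np.linalg.norm(phi - phi_old, ord=np.inf) < tol:
                break
        return phi

    # Example: a small symmetric positive-definite system.
    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(sor(A, b, omega=1.25))   # approaches the exact solution [1/11, 7/11]

The update of φ is done in place, so each component immediately benefits from the components already recomputed in the current sweep.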

Other applications of the method

A similar technique can be used for any iterative method. Values of $\omega > 1$ are used to speed up convergence of a slowly converging process, while values of $\omega < 1$ are often used to help establish convergence of a diverging iterative process.
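
As a sketch of this idea outside linear systems, relaxation can be wrapped around a generic fixed-point iteration $x \leftarrow g(x)$. The helper function and the example map below are illustrative assumptions.

    def relaxed_fixed_point(g, x0, omega, tol=1e-12, max_iter=1000):
        # Relaxed fixed-point iteration: x <- (1 - omega)*x + omega*g(x).
        # omega = 1 recovers the plain iteration; omega > 1 over-relaxes the
        # update, omega < 1 under-relaxes (damps) it.
        x = x0
        for _ in range(max_iter):
            x_new = (1.0 - omega) * x + omega * g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Example: solving x**2 = 2 with the slowly contracting map
    # g(x) = x - 0.1*(x**2 - 2). Over-relaxation with omega = 2 improves the
    # asymptotic error-reduction factor from about 0.72 to about 0.43 per step.
    print(relaxed_fixed_point(lambda x: x - 0.1 * (x**2 - 2.0), 1.0, omega=2.0))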

There are various methods that adaptively set the relaxation parameter $\omega$ based on the observed behavior of the converging process. They usually help to reach super-linear convergence for some problems but fail for others.

References

* CFD-Wiki: Successive over-relaxation method - SOR
* Yousef Saad, [http://www-users.cs.umn.edu/%7Esaad/books.html "Iterative Methods for Sparse Linear Systems"], 1st edition, PWS, 1996.
* [http://www.netlib.org/utk/papers/templates/node11.html Netlib]'s copy of Jack Dongarra's section of "Templates for the Solution of Linear Systems"
* [http://www.cs.utexas.edu/users/young/ Web page of David M. Young]
* [http://www.cs.utexas.edu/users/young/david_young_thesis.pdf PhD Thesis of David M. Young] (Harvard, 1950)

External links

* [http://math.fullerton.edu/mathews/n2003/SORmethodMod.html Module for the SOR Method]
* [http://www.redbrick.dcu.ie/~hyperion/files/Project.pdf The Successive Over–Relaxation (S.O.R.) Algorithm & its Application to Numerical Solutions of Elliptic Partial Differential Equations]

