In mathematics, the additive Schwarz method, named after Hermann Schwarz, solves a boundary value problem for a partial differential equation approximately by splitting it into boundary value problems on smaller domains and adding the results.

Overview

Partial differential equations (PDEs) are used in all hard sciences to model phenomena. For the purpose of exposition, we give an example physical problem and the accompanying boundary value problem (BVP). Even if the reader is unfamiliar with the notation, the purpose is merely to show what a BVP looks like when written down.

    (Model problem) The steady-state heat distribution in a square metal plate whose left edge is held at 1 degree and whose other edges are held at 0 degrees satisfies the following boundary value problem:

        f_xx(x,y) + f_yy(x,y) = 0
        f(0,y) = 1;   f(x,0) = f(x,1) = f(1,y) = 0

    where f is the unknown function, and f_xx and f_yy denote the second partial derivatives with respect to x and y, respectively.

Here, the domain is the square [0,1] × [0,1].

This particular problem can be solved exactly on paper, so there is no need for a computer. However, this is an exceptional case, and most BVPs cannot be solved exactly. The only possibility is to use a computer to find an approximate solution.

Solving on a computer

A typical way of doing this is to sample f at regular intervals in the square [0,1] × [0,1]. For instance, we could take 8 samples in the x direction at x = 0.1, 0.2, ..., 0.8, and 8 samples in the y direction at similar coordinates. We would then have 64 samples of the square, at places like (0.2,0.8) and (0.6,0.6). The goal of the computer program would be to calculate the value of f at those 64 points, which seems easier than finding an abstract function of the square.

There are some difficulties: for instance, it is not possible to calculate f_xx(0.5,0.5) knowing f at only 64 points in the square. To overcome this, one uses some sort of numerical approximation of the derivatives; see for instance the finite element method or finite differences. We ignore these difficulties and concentrate on another aspect of the problem.
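To make the derivative approximation concrete, here is a minimal sketch (the function names and the test function are ours, purely for illustration) of the common central second-difference formula, which rebuilds f_xx at a point from neighbouring samples:

```python
import math

# Central second difference in the x direction:
#   f_xx(x, y) ~ (f(x-h, y) - 2 f(x, y) + f(x+h, y)) / h^2
# which is accurate to O(h^2) for smooth f.
def fxx_approx(f, x, y, h=0.1):
    """Approximate the second partial derivative of f in x at (x, y)."""
    return (f(x - h, y) - 2.0 * f(x, y) + f(x + h, y)) / h**2

# Check on a function whose second derivative is known:
# for f(x, y) = sin(pi x) we have f_xx = -pi^2 sin(pi x).
f = lambda x, y: math.sin(math.pi * x)
exact = -math.pi**2 * math.sin(math.pi * 0.5)
approx = fxx_approx(f, 0.5, 0.5)
print(abs(approx - exact))  # a small discretization error
```

With h = 0.1, the sample spacing used above, the error is already below 0.1; halving h shrinks it by roughly a factor of four.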

Solving linear problems

Whichever method we choose to solve this problem, we will need to solve a large linear system of equations. The reader may recall linear systems of equations from high school; they look like this:

    2a + 5b = 12     (*)
    6a − 3b = −3

This is a system of 2 equations in 2 unknowns (a and b). If we solve the BVP above in the manner suggested, we will need to solve a system of 64 equations in 64 unknowns. This is not a hard problem for modern computers, but with a much larger number of samples even modern computers cannot solve the BVP very efficiently.
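To show where the 64 equations come from, here is a sketch of one possible discretization (the helper names are ours, not from any library): the 5-point difference stencil applied at each of the 8 × 8 interior samples, solved with textbook Gaussian elimination. The grid spacing cancels because the right-hand side of the PDE is zero.

```python
# Assemble A u = b for the model problem: at each interior sample,
#   f[i-1,j] + f[i+1,j] + f[i,j-1] + f[i,j+1] - 4 f[i,j] = 0,
# with known boundary samples moved to the right-hand side.
n = 8                              # samples per direction -> 64 unknowns

def idx(i, j):
    """Flatten grid position (i, j), 0 <= i, j < n, to an equation number."""
    return i * n + j

A = [[0.0] * (n * n) for _ in range(n * n)]
b = [0.0] * (n * n)
for i in range(n):                 # i indexes x, j indexes y
    for j in range(n):
        k = idx(i, j)
        A[k][k] = -4.0
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                A[k][idx(ii, jj)] = 1.0    # neighbouring unknown
            elif ii == -1:
                b[k] -= 1.0                # left edge, where f = 1
            # the other three edges hold f = 0 and contribute nothing

def gauss_solve(A, b):
    """Textbook Gaussian elimination with partial pivoting."""
    m = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            factor = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][c] * x[c] for c in range(r + 1, m))) / M[r][r]
    return x

u = gauss_solve(A, b)
# All 64 temperatures lie strictly between the edge values 0 and 1,
# warmest near the hot left edge.
print(min(u), max(u))
```

The computed values obey a discrete maximum principle: every interior temperature sits strictly between the coldest and hottest boundary values.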

Domain decomposition

This brings us to domain decomposition methods. If we split the domain [0,1] × [0,1] into two subdomains [0,0.5] × [0,1] and [0.5,1] × [0,1], each has only half of the sample points. So we can try to solve a version of our model problem on each subdomain, but this time each subdomain has only 32 sample points. Finally, given the solutions on each subdomain, we can attempt to reconcile them to obtain a solution of the original problem on [0,1] × [0,1].

Size of the problems

In terms of the linear systems, we're trying to split the system of 64 equations in 64 unknowns into two systems of 32 equations in 32 unknowns. This would be a clear gain, for the following reason. Looking back at system (*), we see that there are 6 important pieces of information. They are the coefficients of a and b (2, 5 on the first line and 6, −3 on the second line), and the right hand side (which we write as 12, −3). On the other hand, if we take two "systems" of 1 equation in 1 unknown, it might look like this:

    System 1:  3a = 15
    System 2:  6b = −4

We see that this pair of systems has only 4 important pieces of information. This means that a computer program will have an easier time solving two 1×1 systems than a single 2×2 system: there is simply less data to handle. While the 64×64 and 32×32 systems are too large to illustrate here, the same count gives the 64×64 system 64² + 64 = 4160 pieces of information, while the 32×32 systems each have 32² + 32 = 1056, or roughly a quarter of the 64×64 system.
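The counting rule used above can be written down directly: an n×n system carries n² coefficients plus n right-hand-side entries.

```python
# "Pieces of information" in an n-by-n linear system:
# n*n coefficients plus n right-hand-side entries.
def pieces(n):
    return n * n + n

print(pieces(2))    # 6, as counted for system (*)
print(pieces(64))   # 4160
print(pieces(32))   # 1056, so two such systems hold 2112 -- about half
```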

Domain decomposition algorithm

Unfortunately, for technical reasons it is usually not possible to split our grid of 64 points (a 64×64 system of linear equations) into two grids of 32 points (two 32×32 systems of linear equations) and obtain an answer to the 64×64 system directly. Instead, the following algorithm is what actually happens:

    1) Begin with an approximate solution of the 64×64 system.
    2) From the 64×64 system, create two 32×32 systems to improve the approximate solution.
    3) Solve the two 32×32 systems.
    4) Put the two 32×32 solutions "together" to improve the approximate solution of the 64×64 system.
    5) If the solution is not yet good enough, repeat from step 2.
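These five steps can be sketched on a simpler 1D stand-in: a 64-unknown system A u = b where A is the tridiagonal matrix with 2 on the diagonal and −1 beside it (a discrete 1D Laplacian). The overlapping index blocks, the 0.5 damping factor, and the right-hand side below are our illustrative choices, made so that this plain iteration converges; practical codes typically use the subdomain solves as a preconditioner for a Krylov method instead.

```python
# Additive Schwarz as a stationary iteration on a 1D model system.
N = 64
b = [1.0] * N                      # an arbitrary right-hand side

def matvec(u):
    """y = A u for the tridiagonal (-1, 2, -1) matrix."""
    return [2.0 * u[i]
            - (u[i - 1] if i > 0 else 0.0)
            - (u[i + 1] if i < N - 1 else 0.0)
            for i in range(N)]

def solve_block(d):
    """Direct solve of a (-1, 2, -1) block of any size (Thomas algorithm);
    this is the 'solve the two small systems' of step 3."""
    m = len(d)
    diag = [2.0] * m
    rhs = list(d)
    for i in range(1, m):                      # forward elimination
        w = -1.0 / diag[i - 1]
        diag[i] = 2.0 + w
        rhs[i] -= w * rhs[i - 1]
    x = [0.0] * m
    x[m - 1] = rhs[m - 1] / diag[m - 1]
    for i in range(m - 2, -1, -1):             # back substitution
        x[i] = (rhs[i] + x[i + 1]) / diag[i]
    return x

dom1 = list(range(0, 36))          # overlapping blocks: the overlap is
dom2 = list(range(28, 64))         # indices 28..35

u = [0.0] * N                      # step 1: initial approximate solution
for sweep in range(600):           # step 5: repeat
    Au = matvec(u)
    r = [b[i] - Au[i] for i in range(N)]          # step 2: current residual
    corr = [0.0] * N
    for dom in (dom1, dom2):                      # step 3: small solves
        x = solve_block([r[i] for i in dom])      #   (could run in parallel)
        for xi, i in zip(x, dom):
            corr[i] += xi                         # step 4: add corrections
    u = [u[i] + 0.5 * corr[i] for i in range(N)]  #   with 0.5 damping

exact = solve_block(b)             # the full system happens to be tridiagonal
residual = max(abs(b[i] - v) for i, v in enumerate(matvec(u)))
print(residual)
```

The word "additive" refers to step 4: both subdomain corrections are computed from the same residual and simply added, which is what makes the two small solves independent of each other.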

There are two ways in which this can be better than solving the base 64×64 system. First, if the number of repetitions of the algorithm is small, solving two 32×32 systems may be more efficient than solving a 64×64 system. Second, the two 32×32 systems need not be solved on the same computer, so this algorithm can be run in parallel to use the power of multiple computers.

In fact, solving two 32×32 systems instead of a 64×64 system on a single computer (without using parallelism) is unlikely to be efficient. However, if we use more than two subdomains, the picture can change. For instance, we could use four 16×16 problems, and there's a chance that solving these will be better than solving a single 64×64 problem even if the domain decomposition algorithm needs to iterate a few times.

A technical example

Here we assume that the reader is familiar with partial differential equations.

We will be solving the partial differential equation

    u_xx + u_yy = f     (**)

The boundary condition is boundedness at infinity.

We decompose the domain R² into two overlapping subdomains H_1 = (−∞, 1] × R and H_2 = [0, +∞) × R. In each subdomain, we will be solving a BVP of the form:

    u^(j)_xx + u^(j)_yy = f   in H_j
    u^(j)(x_j, y) = g(y)

where x_1 = 1 and x_2 = 0, taking boundedness at infinity as the other boundary condition. We denote the solution u^(j) of the above problem by S(f, g). Note that S is linear as a function of the pair (f, g).

The Schwarz algorithm proceeds as follows:

#Start with approximate solutions u^(1)_0 and u^(2)_0 of the PDE in subdomains H_1 and H_2 respectively. Initialize k to 0.
#Calculate u^(j)_{k+1} = S(f, u^(3−j)_k(x_j, ·)) with j = 1, 2.
#Increase k by one and repeat step 2 until sufficient precision is achieved.
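The half-plane problem cannot be run on a computer directly, but the same boundary-exchange iteration can be demonstrated on a bounded 1D analogue. Everything below (the choice of f, the grid, and the overlap) is an illustrative assumption: we solve u'' = f on (0, 1) with u(0) = u(1) = 0 and f(x) = −π² sin(πx), whose exact solution is sin(πx), on overlapping intervals (0, x_1) and (x_2, 1); each sub-solve plays the role of S(f, g), taking its interface value g from the other subdomain's previous iterate, exactly as in step 2.

```python
import math

N = 64                         # grid intervals; nodes x_i = i / N
h = 1.0 / N
f = lambda x: -math.pi**2 * math.sin(math.pi * x)

def solve_interval(first, last, left_bc, right_bc):
    """Finite-difference solve of u'' = f on nodes first..last with the
    given boundary values (Thomas algorithm on the -1/2/-1 tridiagonal).
    This is the discrete counterpart of the solution operator S(f, g)."""
    m = last - first - 1                       # interior unknowns
    d = [-h * h * f((first + 1 + i) * h) for i in range(m)]
    d[0] += left_bc
    d[m - 1] += right_bc
    diag = [2.0] * m
    for i in range(1, m):                      # forward elimination
        w = -1.0 / diag[i - 1]
        diag[i] = 2.0 + w
        d[i] -= w * d[i - 1]
    x = [0.0] * m
    x[m - 1] = d[m - 1] / diag[m - 1]
    for i in range(m - 2, -1, -1):             # back substitution
        x[i] = (d[i] + x[i + 1]) / diag[i]
    return [left_bc] + x + [right_bc]          # values at nodes first..last

a, bnd = 28, 36                                # overlap: nodes 28..36
u1 = [0.0] * (bnd + 1)                         # nodes 0..36  (subdomain 1)
u2 = [0.0] * (N - a + 1)                       # nodes 28..64 (subdomain 2)
for k in range(60):
    g1 = u2[bnd - a]                           # u^(2)_k at interface x_1
    g2 = u1[a]                                 # u^(1)_k at interface x_2
    u1 = solve_interval(0, bnd, 0.0, g1)       # u^(1)_{k+1} = S(f, g1)
    u2 = solve_interval(a, N, g2, 0.0)         # u^(2)_{k+1} = S(f, g2)

# Stitch a global answer: u1 left of the overlap's midpoint, u2 right of it.
mid = (a + bnd) // 2
u = u1[:mid] + u2[mid - a:]
err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(N + 1))
print(err)    # small: dominated by the O(h^2) discretization error
```

Note that both interface values g1 and g2 are read from the previous iterates before either subdomain is updated, matching the simultaneous "with j = 1, 2" in step 2; updating sequentially instead would give the Schwarz alternating method.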

See also

* Multigrid method

References

* Barry Smith, Petter Bjørstad, William Gropp, Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations, Cambridge University Press, 1996

* Andrea Toselli and Olof Widlund, Domain Decomposition Methods - Algorithms and Theory, Springer Series in Computational Mathematics, Vol. 34, 2004

* [http://www.ddm.org The official Domain Decomposition Methods page]

Wikimedia Foundation. 2010.
