Adaptive stepsize

Adaptive stepsize is a technique in numerical analysis used for many problems, but mainly for integration. This can be ordinary integration (that is, quadrature) or the process of solving an ordinary differential equation; this article focuses on the latter. For an explanation of adaptive stepsize in ordinary quadrature, see for example Romberg's method.

As usual, an initial value problem is stated:

: \vec{y}\,'(t) = \vec{f}(t,\vec{y}(t)), \qquad \vec{y}(a)=\vec{y_a}

Here, it is made clear that "y" and "f" can be vectors, as they will be when dealing with a system of coupled differential equations. In the rest of the article, this fact will be implicit.

Suppose we are interested in obtaining a solution at the point t=b, given a function f(t,y), an initial point t=a and an initial value y_a=y(a). Of course a numerical solution will generally have an error, so we assume y_b + \epsilon = y(b), where y_b is the numerical approximation and \epsilon is its error.
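
For a concrete scalar illustration (an added example, not part of the general statement), consider

: y'(t) = -y(t), \qquad y(0) = 1, \qquad b = 1

The exact solution is y(t) = e^{-t}, so y(b) = e^{-1} \approx 0.3679, and the error \epsilon of a computed value y_b is y(b) - y_b.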

In order to focus on the adaptive stepsize concept, the simplest of integration methods is used here: the Euler method. It should be emphasized that the Euler method is almost exclusively of educational value; in practice, higher-order (Runge–Kutta) methods are used.

As a reminder, the Euler method is derived as follows. Combining Taylor's theorem (with the Lagrange form of the remainder) with the fact that y'(t)=f(t,y(t)) gives:

: y(t+h)=y(t)+hf(t,y(t))+\frac{h^2}{2}f'(\eta,y(\eta)), \qquad t \le \eta \le t+h

This leads to the Euler method:

: y_{n+1}^{(0)}=y_n+hf(t_n,y_n)

and its local truncation error:

: \tau_{n+1}^{(0)}=ch^2
: y_{n+1}^{(0)} + \tau_{n+1}^{(0)}=y(t+h)
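
As a minimal sketch (not part of the original article), a single Euler step as derived above could be written in Python; the function name euler_step and its argument order are purely illustrative:

    def euler_step(f, t, y, h):
        # One explicit Euler step: y_{n+1}^{(0)} = y_n + h*f(t_n, y_n).
        # Works for scalar y, or for NumPy arrays in the vector-valued case.
        return y + h * f(t, y)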

We mark the Euler solution above and its error with a (0). Since c is not known in the general case (it depends on the derivatives of f), a second solution, computed with a smaller stepsize, is needed in order to say something useful about the error; for example, half the original stepsize can be used. Note that Euler's method must now be applied twice, so that in the worst case twice the local error is incurred. The new, and presumably more accurate, solution is marked with a (1).

: y_{n+\frac{1}{2}}=y_n+\frac{h}{2}f(t_n,y_n)
: y_{n+1}^{(1)}=y_{n+\frac{1}{2}}+\frac{h}{2}f\left(t_{n+\frac{1}{2}},y_{n+\frac{1}{2}}\right)
: \tau_{n+1}^{(1)}=c\left(\frac{h}{2}\right)^2+c\left(\frac{h}{2}\right)^2=2c\left(\frac{h}{2}\right)^2=\frac{1}{2}ch^2=\frac{1}{2}\tau_{n+1}^{(0)}
: y_{n+1}^{(1)} + \tau_{n+1}^{(1)}=y(t+h)

Here, we assume that the error factor c is constant over the interval [t, t+h]; in reality its rate of change is proportional to y^{(3)}(t). Subtracting the two solutions gives the error estimate:

: y_{n+1}^{(1)}-y_{n+1}^{(0)} = \tau_{n+1}^{(1)}

This local error estimate is third order accurate.
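
Continuing the sketch, the full step, the two half steps and the resulting error estimate could be computed as follows (the names are again illustrative):

    def euler_double_step(f, t, y, h):
        # Solution (0): one full Euler step of size h.
        y0 = y + h * f(t, y)
        # Solution (1): two Euler steps of size h/2.
        y_half = y + (h / 2) * f(t, y)
        y1 = y_half + (h / 2) * f(t + h / 2, y_half)
        # Error estimate tau^{(1)} = y^{(1)} - y^{(0)}.
        tau = y1 - y0
        return y0, y1, tau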

The local error estimate can be used to decide how the stepsize h should be modified to achieve the desired accuracy. For example, if a local tolerance tol is allowed, we could let h evolve like:

: h \rightarrow 0.9 \times h \times \frac{tol}{\left| \tau_{n+1}^{(1)} \right|}

The 0.9 is a safety factor to ensure success on the next try; in principle this should give an error of about 0.9 × tol on that try. If |\tau_{n+1}^{(1)}| < tol, we consider the step successful, and the error estimate is used to improve the solution:

: y_{n+1}^{(2)} = y_{n+1}^{(1)} + \tau_{n+1}^{(1)}

This solution is actually third order accurate in the local scope (second order in the global scope), but since there is no error estimate for it, this does not help in reducing the number of steps; it is simply a free improvement. This technique is called Richardson extrapolation.
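
A sketch of the accept/reject decision, using the stepsize update and the Richardson-extrapolated value from above (the small floor on the error is an assumption added here to avoid division by zero; for a system of equations a norm of tau would be used instead of abs):

    def adapt_step(y1, tau, h, tol, safety=0.9):
        # Size of the local error estimate (scalar case).
        err = abs(tau)
        # The step is successful if the estimated error is below the tolerance.
        accepted = err < tol
        # Stepsize update from the text: h -> 0.9 * h * tol / |tau|.
        h_new = safety * h * tol / max(err, 1e-14)
        # Richardson extrapolation: y^{(2)} = y^{(1)} + tau^{(1)}.
        y_improved = y1 + tau
        return accepted, h_new, y_improved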

Beginning with an initial stepsize of h = b - a, this theory makes it possible to integrate the ODE from a to b in a controlled way, using an optimal number of steps for a given local error tolerance.
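
Putting the fragments above together, a complete adaptive Euler driver could look as follows; this is a sketch under the same illustrative assumptions, not a production implementation (in particular, roundoff near the endpoint and a minimum stepsize are not handled):

    def integrate_adaptive(f, a, b, ya, tol):
        # Integrate y' = f(t, y) from t = a to t = b with adaptive Euler steps.
        t, y = a, ya
        h = b - a                      # initial stepsize, as suggested above
        while t < b:
            h = min(h, b - t)          # do not step past the endpoint
            y0, y1, tau = euler_double_step(f, t, y, h)
            accepted, h_new, y_improved = adapt_step(y1, tau, h, tol)
            if accepted:
                t = t + h
                y = y_improved         # use the Richardson-extrapolated value
            h = h_new                  # shrink (rejection) or grow (acceptance)
        return y

For example, integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0, 1e-6) should return a value close to exp(-1) for the scalar test problem mentioned earlier.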

Similar schemes can be developed for higher-order methods, such as the fourth-order Runge–Kutta method. A global error tolerance can also be achieved by scaling the local error to the global scope, although the resulting stepsize may become prohibitively small, especially with this Euler-based method.
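
One common way to relate the two tolerances (an illustrative convention, not the only possible one) is to demand an error per unit step, giving each step a share of the global budget proportional to its length:

: tol_{step} = tol_{global} \cdot \frac{h}{b-a}

Accumulated over the roughly (b-a)/h steps, the local errors then add up to approximately tol_{global}; with the second-order local error of this Euler-based scheme, this requirement can indeed make h very small.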

For adaptive stepsize methods that use a so-called 'embedded' error estimate, see the Runge–Kutta–Fehlberg, Cash–Karp and Dormand–Prince methods. These methods are considered more computationally efficient, but have lower accuracy in their error estimates.


