Adaptive stepsize
Adaptive stepsize is a technique in numerical analysis used for many problems, but mainly for integration. This can be 'normal' integration (that is, 'quadrature'), or the process of solving an ordinary differential equation. This article focuses on the latter. For an explanation of adaptive stepsize in normal integration, see for example Romberg's method.
As usual, an initial value problem is stated:
: y'(t) = f(t, y(t)), \qquad y(a) = y_a
Here, it is made clear that "y" and "f" can be vectors, as they will be when dealing with a system of coupled differential equations. In the rest of the article, this fact will be implicit.
Suppose we are interested in obtaining a solution at the point t = b, given a function f(t, y), an initial point t = a and an initial solution y_a = y(a). Of course a numerical solution will generally have an error, so we assume y_b = y(b) + \epsilon, where \epsilon is the error.
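To make the later sketches concrete, such an initial value problem can be encoded in a few lines of Python. The particular right-hand side f(t, y) = -2ty, the interval [a, b] = [0, 2] and the initial value y_a = 1 below are illustrative assumptions, not part of the problem statement above; they are convenient only because the exact solution y(t) = exp(-t^2) makes errors easy to check.

 import numpy as np

 # Hypothetical example problem (an assumption for illustration):
 # y'(t) = -2 t y(t), y(0) = 1, with exact solution y(t) = exp(-t**2).
 def f(t, y):
     return -2.0 * t * y

 a, b = 0.0, 2.0          # integration interval [a, b]
 y_a = np.array([1.0])    # initial value, stored as a vector as noted above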
In order to focus on the adaptive stepsize concept, the simplest of integration methods is used: the Euler method. It must be emphasized that the Euler method is almost exclusively useful for educational purposes; in practice, higher-order (Runge-Kutta) methods are used. To refresh your memory, the Euler method is derived. Combining Taylor's theorem with the intermediate value theorem and the fact that y'(t) = f(t, y(t)) gives:
: y(t + h) = y(t) + h f(t, y(t)) + \frac{1}{2} h^2 y''(\xi), \qquad \xi \in [t, t + h]
Which leads to the Euler method:
: y_{t+h} = y_t + h f(t, y_t)
And its local truncation error:
: c h^2 = \frac{1}{2} h^2 y''(\xi)
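As a minimal sketch of the step just derived, the Euler update is a one-line function; the name euler_step is my own, and the local truncation error discussed above does not appear anywhere in the code, it is simply incurred.

 # One Euler step: y_{t+h} = y_t + h f(t, y_t).
 # The O(h^2) local truncation error is implicit, not computed here.
 def euler_step(f, t, y, h):
     return y + h * f(t, y)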
We mark this solution and its error with a superscript (0). Since c is not known to us in the general case (it depends on the derivatives of y), in order to say something useful about the error a second solution should be created, using a smaller stepsize, for example half the original stepsize. Note that we have to apply Euler's method twice now, meaning we get two times the local error (in the worst case). Our new, and presumably more accurate, solution is marked with a superscript (1):
: y^{(0)}_{t+h} = y_t + h f(t, y_t)
: y_{t+h/2} = y_t + \frac{h}{2} f(t, y_t)
: y^{(1)}_{t+h} = y_{t+h/2} + \frac{h}{2} f\left(t + \frac{h}{2}, y_{t+h/2}\right)
: y(t + h) = y^{(0)}_{t+h} + c h^2 = y^{(1)}_{t+h} + 2 c \left(\frac{h}{2}\right)^2
Here, we assume the error factor c is constant over the interval [t, t + h]. In reality its rate of change is proportional to y'''(t). Subtracting the two solutions gives the error estimate:
: \epsilon_{t+h} = y^{(1)}_{t+h} - y^{(0)}_{t+h} = c \frac{h^2}{2}
This local error estimate is third order accurate.
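A sketch of this step-doubling error estimate, assuming the euler_step helper from the previous example; the function name euler_double_step is again my own.

 # Advance once with stepsize h (solution (0)) and twice with stepsize h/2
 # (solution (1)); their difference estimates the local error of the more
 # accurate solution, i.e. c*h^2/2 in the notation above.
 def euler_double_step(f, t, y, h):
     y0 = euler_step(f, t, y, h)                    # solution (0): one full step
     y_half = euler_step(f, t, y, h / 2)            # first half step
     y1 = euler_step(f, t + h / 2, y_half, h / 2)   # solution (1): second half step
     err = y1 - y0                                  # local error estimate
     return y0, y1, err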
The local error estimate can be used to decide how the stepsize h should be modified to achieve the desired accuracy. For example, if a local tolerance of tol is allowed, we could let h evolve like:
: h_{new} = 0.9 \, h_{old} \sqrt{\frac{tol}{|\epsilon_{t+h}|}}
The 0.9 is a safety factor to ensure success on the next try. This should, in principle, give an error of about 0.81 \, tol on the next try. If |\epsilon_{t+h}| < tol, we consider the step successful, and the error estimate is used to improve the solution:
: y^{(2)}_{t+h} = y^{(1)}_{t+h} + \epsilon_{t+h} = 2 y^{(1)}_{t+h} - y^{(0)}_{t+h}
This solution is actually third order accurate in the local scope (second order in the global scope), but since there is no error estimate for it, this doesn't help in reducing the number of steps; it's just gravy on our potatoes. By the way, this technique is called Richardson extrapolation.
Beginning with an initial stepsize of h = b - a, this theory facilitates our controllable integration of the ODE from point a to b, using an optimal number of steps, given a local error tolerance.
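Putting the pieces together, a minimal adaptive Euler driver could look like the following. It is only a sketch of the scheme described above, assuming the euler_double_step helper from the earlier example; the clamping of the stepsize change factor is an extra safeguard of my own, not something prescribed by the text.

 import numpy as np

 def integrate_adaptive(f, a, b, y_a, tol, h0=None):
     """Adaptive Euler integration of y' = f(t, y) from t = a to t = b."""
     t = a
     y = np.asarray(y_a, dtype=float)
     h = (b - a) if h0 is None else h0     # initial stepsize h = b - a, as above
     while t < b:
         h = min(h, b - t)                 # never step past b
         y0, y1, err = euler_double_step(f, t, y, h)
         err_norm = np.max(np.abs(err))    # size of the local error estimate
         if err_norm < tol:                # step accepted
             t = t + h
             y = y1 + err                  # Richardson extrapolation: 2*y1 - y0
         # Propose the next stepsize; 0.9 is the safety factor from the text.
         # The clamp on the change factor is an extra safeguard (my assumption).
         factor = 0.9 * np.sqrt(tol / err_norm) if err_norm > 0.0 else 5.0
         h = h * min(max(factor, 0.1), 5.0)
     return t, y

With the hypothetical problem from the beginning of the article, integrate_adaptive(f, a, b, y_a, tol=1e-4) returns the final time together with the approximation of y(b); comparing the latter against exp(-b**2) gives a rough check on the accumulated error.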
Similar methods can be developed for higher-order methods, such as the fourth-order Runge-Kutta method. Also, a global error tolerance can be achieved by scaling the local error to the global scope. However, you might end up with a stepsize that is prohibitively small, especially when using this Euler-based method.
If you are interested in adaptive stepsize methods that use a so-called 'embedded' error estimate, see Fehlberg, Cash-Karp and Dormand-Prince. These methods are considered to be more computationally efficient, but have lower accuracy in their error estimates.