Divided differences

In mathematics, divided differences is a recursive division process.

The method can be used to calculate the coefficients in the interpolation polynomial in the Newton form.

Definition

Given n data points

(x_0, y_0),\ldots,(x_{n-1}, y_{n-1})

The forward divided differences are defined as:

[y_\nu] := y_\nu, \qquad \nu \in \{ 0,\ldots,n-1\}
[y_\nu,\ldots,y_{\nu+j}] := \frac{[y_{\nu+1},\ldots , y_{\nu+j}] - [y_{\nu},\ldots , y_{\nu+j-1}]}{x_{\nu+j}-x_\nu}, \qquad \nu\in\{0,\ldots,n-1-j\},\ j\in\{1,\ldots,n-1\}.

The backward divided differences are defined as:

[y_\nu] := y_{\nu},\qquad \nu \in \{ 0,\ldots,n-1\}
[y_\nu,\ldots,y_{\nu-j}] := \frac{[y_\nu,\ldots , y_{\nu-j+1}] - [y_{\nu-1},\ldots , y_{\nu-j}]}{x_\nu - x_{\nu-j}}, \qquad \nu\in\{j,\ldots,n-1\},\ j\in\{1,\ldots,n-1\}.
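
The recursion translates directly into code. The following is a minimal Python sketch (the function name and example data are ours, not from the article) that evaluates a forward divided difference by direct recursion:

    def divided_difference(x, y, nu, j):
        """Forward divided difference [y_nu, ..., y_{nu+j}] over nodes x,
        computed by the recursive definition above."""
        if j == 0:
            return y[nu]
        return (divided_difference(x, y, nu + 1, j - 1)
                - divided_difference(x, y, nu, j - 1)) / (x[nu + j] - x[nu])

    x = [0.0, 1.0, 3.0]
    y = [1.0, 2.0, 10.0]
    print(divided_difference(x, y, 0, 1))  # [y_0, y_1] = (2 - 1)/(1 - 0) = 1.0
    print(divided_difference(x, y, 0, 2))  # [y_0, y_1, y_2] = 1.0

Direct recursion recomputes lower-order differences many times over; the tabular scheme shown further below avoids this.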

Notation

If the data points are given as a function f,

(x_0, f(x_0)),\ldots,(x_{n-1}, f(x_{n-1}))

one sometimes writes

f[x_\nu] := f(x_{\nu}), \qquad \nu \in \{ 0,\ldots,n-1 \}
f[x_\nu,\ldots,x_{\nu+j}] := \frac{f[x_{\nu+1},\ldots , x_{\nu+j}] - f[x_\nu,\ldots , x_{\nu+j-1}]}{x_{\nu+j}-x_\nu}, \qquad \nu\in\{0,\ldots,n-1-j\},\ j\in\{1,\ldots,n-1\}.

Several notations for the divided difference of the function f on the nodes x_0, ..., x_n are used:

[x_0,\ldots,x_n]f,
[x_0,\ldots,x_n;f],
D[x_0,\ldots,x_n]f

etc.

Example

For the first few values of j (with ν = 0), this yields


\begin{align}
  \mathopen[y_0] &= y_0 \\
  \mathopen[y_0,y_1] &= \frac{y_1-y_0}{x_1-x_0} \\
  \mathopen[y_0,y_1,y_2]
&= \frac{\mathopen[y_1,y_2]-\mathopen[y_0,y_1]}{x_2-x_0}
 =  \frac{\frac{y_2-y_1}{x_2-x_1}-\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0}
 = \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)}
\\
  \mathopen[y_0,y_1,y_2,y_3] &= \frac{\mathopen[y_1,y_2,y_3]-\mathopen[y_0,y_1,y_2]}{x_3-x_0}
\end{align}

To make the recursive process clearer, the divided differences can be put in tabular form:


\begin{matrix}
x_0 & y_0 = [y_0] &           &               & \\
        &       & [y_0,y_1] &               & \\
x_1 & y_1 = [y_1] &           & [y_0,y_1,y_2] & \\
        &       & [y_1,y_2] &               & [y_0,y_1,y_2,y_3]\\
x_2 & y_2 = [y_2] &           & [y_1,y_2,y_3] & \\
        &       & [y_2,y_3] &               & \\
x_3 & y_3 = [y_3] &           &               & \\
\end{matrix}
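
Filling this table column by column computes every divided difference exactly once. A sketch of the scheme (naming and sample data are ours):

    def divided_difference_table(x, y):
        """Full triangular scheme: column j holds the values
        [y_nu, ..., y_{nu+j}] for nu = 0, ..., n-1-j."""
        table = [list(y)]
        for j in range(1, len(y)):
            prev = table[-1]
            table.append([(prev[nu + 1] - prev[nu]) / (x[nu + j] - x[nu])
                          for nu in range(len(y) - j)])
        return table

    x = [0.0, 1.0, 3.0]
    y = [1.0, 2.0, 10.0]
    # Top diagonal [y_0], [y_0,y_1], [y_0,y_1,y_2]: the Newton-form coefficients.
    print([column[0] for column in divided_difference_table(x, y)])  # [1.0, 1.0, 1.0]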

Properties

(f+g)[x_0,\dots,x_n] = f[x_0,\dots,x_n] + g[x_0,\dots,x_n]
(\lambda\cdot f)[x_0,\dots,x_n] = \lambda\cdot f[x_0,\dots,x_n]
(f\cdot g)[x_0,\dots,x_n] = f[x_0]\cdot g[x_0,\dots,x_n] + f[x_0,x_1]\cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]\cdot g[x_n]
  • From the mean value theorem for divided differences it follows that
\lim_{(x_0,\dots,x_n)\to(\xi,\dots,\xi)} f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!}
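
This limit property can be observed numerically; the sketch below (the recursive helper and the sample function are our own choices) clusters the nodes around a point and compares against f^(n)(ξ)/n!:

    from math import exp, factorial

    def dd(x, f):
        """f[x_0,...,x_n] by the recursive definition."""
        if len(x) == 1:
            return f(x[0])
        return (dd(x[1:], f) - dd(x[:-1], f)) / (x[-1] - x[0])

    # As the nodes cluster at xi, f[x_0,...,x_n] tends to f^(n)(xi)/n!.
    # With f = exp, every derivative is exp itself:
    xi, n, h = 0.3, 3, 1e-3
    nodes = [xi + k * h for k in range(n + 1)]
    print(dd(nodes, exp))            # ~0.22531
    print(exp(xi) / factorial(n))    # ~0.22498; the two agree as h -> 0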

Matrix form

The divided difference scheme can be put into an upper triangular matrix. Let T_f(x_0,\dots,x_n)=
\begin{pmatrix}
f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\
0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & 0 & f[x_n]
\end{pmatrix}.

Then the following holds:

  • T_{f+g} x = T_f x + T_g x
  • T_{f\cdot g} x = T_f x \cdot T_g x
This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. In summary, the matrices of divided difference schemes with respect to the same set of nodes form a commutative ring.
  • Since T_f x is a triangular matrix, its eigenvalues are obviously f(x_0), \dots, f(x_n).
  • Let \delta_\xi be a Kronecker delta-like function, that is
\delta_\xi(t) = \begin{cases}1 &: t=\xi , \\0 &: \mbox{else}.\end{cases}
Obviously f\cdot \delta_\xi = f(\xi)\cdot \delta_\xi, thus \delta_\xi is an eigenfunction of pointwise function multiplication. In this sense, T_{\delta_{x_i}} x is an "eigenmatrix" of T_f x:  T_f x \cdot T_{\delta_{x_i}} x = f(x_i) \cdot T_{\delta_{x_i}} x . However, since all columns of T_{\delta_{x_i}} x are multiples of each other, the matrix rank of T_{\delta_{x_i}} x is 1. So the matrix of all eigenvectors of T_f x can be composed from the i-th column of each T_{\delta_{x_i}} x. Denote this matrix of eigenvectors by U x. For example:
 U(x_0,x_1,x_2,x_3) = \begin{pmatrix}
1 & \frac{1}{(x_1-x_0)} & \frac{1}{(x_2-x_0)\cdot(x_2-x_1)} & \frac{1}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\
0 & 1 & \frac{1}{(x_2-x_1)} & \frac{1}{(x_3-x_1)\cdot(x_3-x_2)} \\
0 & 0 & 1 & \frac{1}{(x_3-x_2)} \\
0 & 0 & 0 & 1
\end{pmatrix}
The diagonalization of T_f x can be written as
 U x \cdot \operatorname{diag}(f(x_0),\dots,f(x_n)) = T_f x \cdot U x .
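
The matrix identities above are easy to verify numerically. Below is a sketch (using NumPy; all names and test data are ours) that builds T_f x and checks T_{f\cdot g} x = T_f x \cdot T_g x and the resulting commutativity:

    import numpy as np

    def dd_matrix(x, f):
        """Upper triangular divided difference scheme T_f x."""
        n = len(x)
        T = np.zeros((n, n))
        for i in range(n):
            T[i, i] = f(x[i])
        for j in range(1, n):
            for i in range(n - j):
                T[i, i + j] = (T[i + 1, i + j] - T[i, i + j - 1]) / (x[i + j] - x[i])
        return T

    x = np.array([0.0, 1.0, 2.0, 4.0])
    f, g = np.sin, lambda t: t * t
    Tf, Tg = dd_matrix(x, f), dd_matrix(x, g)
    Tfg = dd_matrix(x, lambda t: f(t) * g(t))
    print(np.allclose(Tfg, Tf @ Tg))      # True, by the Leibniz rule
    print(np.allclose(Tf @ Tg, Tg @ Tf))  # True: such matrices commute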

Alternative definitions

Expanded form


\begin{align}
f[x_0] &= f(x_0) \\
f[x_0,x_1] &= \frac{f(x_0)}{(x_0-x_1)} + \frac{f(x_1)}{(x_1-x_0)} \\
f[x_0,x_1,x_2] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)} \\
f[x_0,x_1,x_2,x_3] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)\cdot(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)\cdot(x_1-x_3)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)\cdot(x_2-x_3)} + \frac{f(x_3)}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\
f[x_0,\dots,x_n] &=
\sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j-x_k)}
\end{align}

With the help of the polynomial function q given by q(\xi) = (\xi-x_0) \cdots (\xi-x_n), this can be written as


f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{q'(x_j)}.
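
The expanded form gives a direct, non-recursive way to evaluate a divided difference. A sketch (our naming), valid for pairwise distinct nodes:

    from math import prod

    def dd_expanded(x, f):
        """f[x_0,...,x_n] via the expanded form: sum of f(x_j) / q'(x_j)."""
        n = len(x)
        return sum(f(x[j]) / prod(x[j] - x[k] for k in range(n) if k != j)
                   for j in range(n))

    # p_2[x_0,x_1,x_2] = 1 for any three distinct nodes:
    print(dd_expanded([1.0, 2.0, 4.0], lambda t: t * t))  # 1.0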

Partial fractions

You can represent partial fractions using the expanded form of divided differences. (This does not simplify computation, but is interesting in itself.) If p and q are polynomial functions, where \mathrm{deg}\ p < \mathrm{deg}\ q and q is given in terms of linear factors by q(\xi) = (\xi-x_1)\cdot \dots \cdot(\xi-x_n), then it follows from partial fraction decomposition that

\frac{p(\xi)}{q(\xi)} = \left(t\to\frac{p(t)}{\xi-t}\right)[x_1,\dots,x_n].

If limits of the divided differences are accepted, then this connection also holds if some of the x_j coincide.

If f is a polynomial function of arbitrary degree which is decomposed as f(x) = p(x) + q(x)\cdot d(x) by polynomial division of f by q, then

\frac{p(\xi)}{q(\xi)} = \left(t\to\frac{f(t)}{\xi-t}\right)[x_1,\dots,x_n].
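
A quick numerical check of the first identity (the polynomials, nodes, and evaluation point below are our own choices; dd is the recursive helper from earlier):

    from math import prod

    def dd(x, f):
        """f[x_1,...,x_n] by the recursive definition."""
        if len(x) == 1:
            return f(x[0])
        return (dd(x[1:], f) - dd(x[:-1], f)) / (x[-1] - x[0])

    roots = [1.0, 2.0, 3.0]          # q(xi) = (xi-1)(xi-2)(xi-3)
    p = lambda t: t + 5              # deg p < deg q
    xi = 0.25
    lhs = p(xi) / prod(xi - r for r in roots)
    rhs = dd(roots, lambda t: p(t) / (xi - t))
    print(abs(lhs - rhs) < 1e-12)    # True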

Peano form

The divided differences can be expressed as

f[x_0,\ldots,x_n] = \frac{1}{n!} \int_{x_0}^{x_n} f^{(n)}(t)B_{n-1}(t) \, dt

where B_{n-1} is a B-spline of degree n-1 for the data points x_0,\dots,x_n and f^{(n)} is the n-th derivative of the function f.

This is called the Peano form of the divided differences, and B_{n-1} is called the Peano kernel for the divided differences; both are named after Giuseppe Peano.

Taylor form

First order

If the nodes lie close together, the numerical computation of the divided differences becomes inaccurate: one divides differences of nearly equal values, which are close to zero and carry a high relative error due to cancellation, by likewise small node distances. However, we know that difference quotients approximate the derivative, and vice versa:

\frac{f(y)-f(x)}{y-x} \approx f'(x) for x \approx y

This approximation can be turned into an identity whenever Taylor's theorem applies.

f(y) = f(x) + f'(x)\cdot(y-x) + f''(x)\cdot\frac{(y-x)^2}{2!} + f'''(x)\cdot\frac{(y-x)^3}{3!} + \dots
\Rightarrow \frac{f(y) - f(x)}{y-x} = f'(x) + f''(x)\cdot\frac{y-x}{2!} + f'''(x)\cdot\frac{(y-x)^2}{3!} + \dots

You can eliminate the odd powers of y - x by expanding the Taylor series at the center between x and y:

x = m - h, \quad y = m + h, \quad \text{that is}\quad m = \frac{x+y}{2},\ h=\frac{y-x}{2}
f(m+h) = f(m) + f'(m)\cdot h + f''(m)\cdot\frac{h^2}{2!} + f'''(m)\cdot\frac{h^3}{3!} + \dots
f(m-h) = f(m) - f'(m)\cdot h + f''(m)\cdot\frac{h^2}{2!} - f'''(m)\cdot\frac{h^3}{3!} + \dots
\frac{f(y) - f(x)}{y-x} = \frac{f(m+h) - f(m-h)}{2\cdot h} =
 f'(m) + f'''(m)\cdot\frac{h^2}{3!} + \dots
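
The cancellation of the odd powers means the symmetric quotient misses f'(m) only by a term of order h^2, rather than h for the one-sided quotient. A short numerical illustration (the test function is our choice):

    from math import sin, cos

    m = 1.0
    for h in (1e-1, 5e-2, 2.5e-2):
        symmetric = (sin(m + h) - sin(m - h)) / (2 * h)
        one_sided = (sin(m + h) - sin(m)) / h
        # symmetric error ~ f'''(m) h^2/6; one-sided error ~ f''(m) h/2
        print(h, symmetric - cos(m), one_sided - cos(m))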

Higher order

The Taylor series, or any other representation by a function series, can in principle be used to approximate divided differences. A Taylor series is an infinite sum of power functions, and the mapping from a function f to the divided difference f[x_0,\dots,x_n] is a linear functional, so we can apply this functional to the individual summands.

Write the power functions in ordinary function notation: p_n(x) = x^n.

The regular Taylor series is a weighted sum of power functions: f = f(0)\cdot p_0 + f'(0)\cdot p_1 + \frac{f''(0)}{2!}\cdot p_2 + \frac{f'''(0)}{3!}\cdot p_3 + \dots

Taylor series for divided differences: f[x_0,\dots,x_n] = f(0)\cdot p_0[x_0,\dots,x_n] + f'(0)\cdot p_1[x_0,\dots,x_n] + \frac{f''(0)}{2!}\cdot p_2[x_0,\dots,x_n] + \frac{f'''(0)}{3!}\cdot p_3[x_0,\dots,x_n] + \dots

We know that the first n terms vanish, because we have a higher difference order than polynomial order, and in the following term the divided difference is one:


\begin{array}{llcl}
\forall j<n & p_j[x_0,\dots,x_n] &=& 0 \\
 & p_n[x_0,\dots,x_n] &=& 1 \\
 & p_{n+1}[x_0,\dots,x_n] &=& x_0 + \dots + x_n \\
 & p_{n+m}[x_0,\dots,x_n] &=& \sum_{a\in\{0,\dots,n\}^m \text{ with } a_1 \le a_2 \le \dots \le a_m} \prod_{j\in a} x_j. \\
\end{array}

It follows that the Taylor series for the divided difference essentially starts with \frac{f^{(n)}(0)}{n!}, which is also a simple approximation of the divided difference, according to the mean value theorem for divided differences.

If we had to compute the divided differences for the power functions in the usual way, we would encounter the same numerical problems that we had when computing the divided difference of f. Fortunately, there is a simpler way. The following identity holds:


t^n = (1 - x_0\cdot t) \cdots (1 - x_n\cdot t) \cdot
(p_0[x_0,\dots,x_n] + p_1[x_0,\dots,x_n]\cdot t + p_2[x_0,\dots,x_n]\cdot t^2 + \dots) .

Consequently, we can compute the divided differences of p_n by a division of formal power series. Note how this reduces to the successive computation of powers when we compute p_n[h] for several n.

Cf. an implementation in Haskell.
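
That implementation is not reproduced here, but the idea is easy to sketch in Python (our own code): dividing the formal power series t^n by each factor (1 - x_j t) in turn yields the coefficients p_j[x_0,\dots,x_n].

    def power_dd_series(nodes, num_terms):
        """Coefficients p_j[x_0,...,x_n] for j = 0, ..., num_terms-1,
        by formal power series division of t^n by (1-x_0 t)...(1-x_n t)."""
        n = len(nodes) - 1
        c = [0.0] * num_terms
        if n < num_terms:
            c[n] = 1.0                       # series of t^n
        for x in nodes:
            # dividing by (1 - x*t) means d_k = c_k + x * d_{k-1}
            d = [c[0]]
            for k in range(1, num_terms):
                d.append(c[k] + x * d[k - 1])
            c = d
        return c

    print(power_dd_series([1.0, 2.0], 4))    # [0.0, 1.0, 3.0, 7.0]

For j < n the coefficients vanish, p_n gives 1, and p_{n+1} gives x_0 + \dots + x_n, as stated above.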

If you need to compute a whole divided difference scheme with respect to a Taylor series, see the section on polynomials and power series below.

Polynomials and power series

Divided differences of polynomials are particularly interesting, because they can benefit from the Leibniz rule. The matrix J with


J=
\begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 &        & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 &          & x_n
\end{pmatrix}

contains the divided difference scheme for the identity function with respect to the nodes x_0,\dots,x_n; thus J^n contains the divided differences for the power function with exponent n. Consequently, you can obtain the divided differences for a polynomial function \varphi(p) with respect to the polynomial p by applying p (more precisely, its corresponding matrix polynomial function \varphi_{\mathrm{M}}(p)) to the matrix J.

\varphi(p)(\xi) = a_0 + a_1\cdot \xi + \dots + a_n\cdot \xi^n
\varphi_{\mathrm{M}}(p)(J) = a_0 + a_1\cdot J + \dots + a_n\cdot J^n
= \begin{pmatrix}
\varphi(p)[x_0] & \varphi(p)[x_0,x_1] & \varphi(p)[x_0,x_1,x_2] & \ldots & \varphi(p)[x_0,\dots,x_n] \\
0 & \varphi(p)[x_1] & \varphi(p)[x_1,x_2] & \ldots & \varphi(p)[x_1,\dots,x_n] \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & 0 & \varphi(p)[x_n]
\end{pmatrix}

This is known as Opitz' formula.[1][2]

Now consider increasing the degree of p to infinity, i.e. turning the Taylor polynomial into a Taylor series. Let f be a function which corresponds to a power series. You can then compute the divided difference scheme by applying the corresponding matrix series to J. If the nodes x_0,\dots,x_n are all equal, then J is a Jordan block, and the computation boils down to generalizing a scalar function to a matrix function using the Jordan decomposition.
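
A small numerical illustration of Opitz' formula (a NumPy-based sketch; the nodes and the polynomial are our own choices):

    import numpy as np

    x = np.array([1.0, 2.0, 4.0])
    n = len(x)
    J = np.diag(x) + np.diag(np.ones(n - 1), 1)   # nodes on the diagonal, ones above

    # phi(p)(xi) = 2 + 3*xi + xi^3, applied to J instead of a scalar:
    P = 2 * np.eye(n) + 3 * J + np.linalg.matrix_power(J, 3)
    # First row holds phi(p)[x_0], phi(p)[x_0,x_1], phi(p)[x_0,x_1,x_2]:
    print(P[0])   # [ 6. 10.  7.]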

Forward differences

When the data points are equidistantly spaced, we get the special case called forward differences. They are easier to calculate than the more general divided differences.

Definition

Given n data points

(x_0, y_0),\ldots,(x_{n-1}, y_{n-1})

with

x_{\nu} = x_0 + \nu h \mbox{ , } h > 0 \mbox{ , } \nu=0,\ldots,n-1

the divided differences can be calculated via forward differences defined as

\triangle^{(0)}y_{i} := y_{i}
\triangle^{(k)}y_{i} := \triangle^{(k-1)}y_{i+1} - \triangle^{(k-1)}y_{i} \mbox{ , } k \ge 1.
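
The divided differences are then recovered from the forward differences via [y_\nu,\ldots,y_{\nu+k}] = \frac{\triangle^{(k)}y_\nu}{k!\,h^k}. A sketch (naming and sample data are ours):

    from math import factorial

    def forward_differences(y):
        """diffs[k][i] holds the k-th forward difference of y_i."""
        diffs = [list(y)]
        for _ in range(len(y) - 1):
            prev = diffs[-1]
            diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
        return diffs

    h = 0.5
    y = [1.0, 2.0, 4.0, 8.0]
    diffs = forward_differences(y)
    # Divided differences [y_0], [y_0,y_1], ... recovered from the scheme:
    print([diffs[k][0] / (factorial(k) * h**k) for k in range(len(y))])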

Example


\begin{matrix}
y_0 &               &                   &                  \\
    & \triangle y_0 &                   &                  \\
y_1 &               & \triangle^{2} y_0 &                  \\
    & \triangle y_1 &                   & \triangle^{3} y_0\\
y_2 &               & \triangle^{2} y_1 &                  \\
    & \triangle y_2 &                   &                  \\
y_3 &               &                   &                  \\
\end{matrix}

Computer Program
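
A compact way to put the scheme to work is to compute the Newton-form coefficients in place and evaluate the resulting interpolation polynomial. The following self-contained Python sketch (all naming is ours) does both:

    def newton_interpolation(x, y):
        """Return a function evaluating the interpolation polynomial in
        Newton form; the coefficients [y_0], [y_0,y_1], ..., [y_0,...,y_{n-1}]
        are computed in place."""
        n = len(y)
        c = list(y)
        for j in range(1, n):
            for i in range(n - 1, j - 1, -1):
                c[i] = (c[i] - c[i - 1]) / (x[i] - x[i - j])
        def p(t):
            # Horner-like evaluation of the Newton form
            result = c[-1]
            for i in range(n - 2, -1, -1):
                result = result * (t - x[i]) + c[i]
            return result
        return p

    p = newton_interpolation([0.0, 1.0, 3.0], [1.0, 2.0, 10.0])
    print(p(0.0), p(1.0), p(3.0))   # 1.0 2.0 10.0 - exact at the nodes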

Notes

  1. ^ de Boor, Carl, Divided Differences, Surv. Approx. Theory 1 (2005), 46–69.
  2. ^ Opitz, G., Steigungsmatrizen, Z. Angew. Math. Mech. 44 (1964), T52–T54.

See also


  • Mean value theorem (divided differences)
  • Newton polynomial
  • Finite difference