Graeffe's method

In mathematics, Graeffe's method or Dandelin–Graeffe method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Karl Heinrich Gräffe in 1837. Lobachevsky in 1834 also discovered the principal idea of the method.[1] The method separates the roots of a polynomial by squaring them repeatedly. This squaring of the roots is done implicitly, that is, only working on the coefficients of the polynomial. Finally, Viète's formulas are used in order to approximate the roots.

Dandelin–Graeffe iteration

Let p(x) be an nth degree polynomial.

p(x) = (x-x_1)(x-x_2)\cdots(x-x_n) \,

Then

p(-x) = (-1)^n\, (x+x_1)(x+x_2)\cdots(x+x_n). \,

Let q(x) be the polynomial which has the squares x_1^2,\,x_2^2,\dots,\, x_n^2 as its roots,

q(x)=(x-x_1^2)(x-x_2^2)\cdots(x-x_n^2). \,

Hence by the binomial identity x^2-x_k^2=(x-x_k)(x+x_k)

q(x^2) = (x^2-x_1^2)(x^2-x_2^2)\cdots(x^2-x_n^2)=(-1)^n\,p(x)p(-x). \,

The polynomial q(x) can now be computed by algebraic operations on the coefficients of the polynomial p(x) alone. Write

p(x)=x^n+a_1x^{n-1}+\cdots+a_{n-1}x+a_n \,

and

q(x)=x^n+b_1x^{n-1}+\cdots+b_{n-1}x+b_n, \,

then the coefficients are related by

b_k=(-1)^k\,a_k^2+2\sum_{j=0}^{k-1}(-1)^j\,a_ja_{2k-j},\text{ with }a_0=b_0=1. \,
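As an illustrative sketch (not part of the original article), this recurrence can be evaluated directly; the function name is hypothetical, and coefficients are listed from the leading term down, a[0] = a_0 = 1, as in the notation above.

```python
# Compute the coefficients b_k of q from those of p via the recurrence
# b_k = (-1)^k a_k^2 + 2 * sum_{j=0}^{k-1} (-1)^j a_j a_{2k-j},  a_0 = b_0 = 1.
# Coefficients are stored leading-term first: a[k] holds a_k.

def graeffe_coeffs(a):
    n = len(a) - 1
    A = lambda i: a[i] if 0 <= i <= n else 0.0   # out-of-range terms vanish
    b = [1.0]
    for k in range(1, n + 1):
        cross = sum((-1) ** j * A(j) * A(2 * k - j) for j in range(k))
        b.append((-1) ** k * A(k) ** 2 + 2 * cross)
    return b

# Example: p(x) = (x-2)(x-3) = x^2 - 5x + 6 has squared roots 4 and 9,
# so q(x) = (x-4)(x-9) = x^2 - 13x + 36.
print(graeffe_coeffs([1.0, -5.0, 6.0]))  # → [1.0, -13.0, 36.0]
```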

Graeffe observed that one obtains a simplified algebraic expression for q(x) when separating p(x) into its odd and even parts,

p(x)=p_e(x^2)+x\,p_o(x^2)\;\implies\; q(x)=(-1)^n(p_e(x)^2-x\,p_o(x)^2). \,

This expression involves the squaring of two polynomials of only half the degree, and is therefore used in most implementations of the method.
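A single Graeffe step using this even/odd split might look as follows in Python (a minimal sketch, not from the article; the polynomial is assumed monic, with coefficients stored constant-term first):

```python
def poly_mul(a, b):
    """Multiply two coefficient lists (ascending order: a[i] is the x^i term)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def graeffe_step(p):
    """One root-squaring step: q(x) = (-1)^n (p_e(x)^2 - x p_o(x)^2)."""
    n = len(p) - 1
    pe, po = p[0::2], p[1::2]           # even and odd parts of p
    sq = poly_mul(pe, pe)               # p_e(x)^2
    xsq = [0.0] + poly_mul(po, po)      # x * p_o(x)^2
    sq += [0.0] * (n + 1 - len(sq))     # pad both to degree n
    xsq += [0.0] * (n + 1 - len(xsq))
    s = (-1) ** n
    return [s * (u - v) for u, v in zip(sq, xsq)]

# (x-2)(x-3) = 6 - 5x + x^2  →  (x-4)(x-9) = 36 - 13x + x^2
print(graeffe_step([6.0, -5.0, 1.0]))  # → [36.0, -13.0, 1.0]
```

Only two polynomials of half the degree are squared, as described above, instead of one full-degree product.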

Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating k times gives a polynomial

q^k(y) = y^n + a^k_{\;1}\,y^{n-1} + \cdots + a^k_{\;n-1}\,y + a^k_{\;n} \,

of degree n with roots y_1=x_1^{2^k},\,y_2=x_2^{2^k},\,\dots,\,y_n=x_n^{2^k}. If the magnitudes of the roots of the original polynomial were separated by some factor ρ > 1, that is, |x_m|\ge\rho\,|x_{m+1}|, then the roots of the k-th iterate are separated by the fast-growing factor \rho^{2^k}\ge 1+2^k(\rho-1).

Classical Graeffe's method

Next, Vieta's relations are used:

\begin{align}
a^k_{\;1} &= -(y_1+y_2+\cdots+y_n)\\
a^k_{\;2} &= y_1 y_2 + y_1 y_3+\cdots+y_{n-1} y_n\\
        &\;\vdots\\
a^k_{\;n} &= (-1)^n(y_1 y_2 \cdots y_n).
\end{align}

If the roots x_1,\dots,x_n are sufficiently separated, say by a factor ρ > 1, |x_m|\ge \rho|x_{m+1}|, then the iterated powers y_1,y_2,\dots,y_n of the roots are separated by the factor \rho^{2^k}, which quickly becomes very large.

The coefficients of the iterated polynomial can then be approximated by their leading term,

a^k_{\;1} \approx -y_1
a^k_{\;2} \approx y_1 y_2 and so on,

implying


  y_1\approx -a^k_{\;1},\; 
  y_2\approx -a^k_{\;2}/a^k_{\;1},
    \;\dots\;
  y_n\approx -a^k_{\;n}/a^k_{\;n-1}.

Finally, logarithms are used in order to find the absolute values of the roots of the original polynomial. These magnitudes alone are already useful to generate meaningful starting points for other root-finding methods.
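The classical procedure can be sketched as follows in Python (illustrative, with hypothetical helper names; the straightforward p(x)p(-x) product is used here instead of the even/odd split, coefficients ascending, polynomial monic):

```python
import math

def graeffe_step(p):
    """q(x^2) = (-1)^n p(x) p(-x); coefficients ascending, p monic."""
    n = len(p) - 1
    pm = [(-1) ** i * c for i, c in enumerate(p)]          # p(-x)
    prod = [0.0] * (2 * n + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(pm):
            prod[i + j] += a * b                            # p(x) p(-x)
    return [(-1) ** n * prod[2 * i] for i in range(n + 1)]  # even powers only

def root_magnitudes(p, k):
    """Estimate |x_1| >= ... >= |x_n| from k root-squaring steps."""
    q = list(p)
    for _ in range(k):
        q = graeffe_step(q)
    n = len(p) - 1
    # |y_m| ~ |a^k_m / a^k_{m-1}|; undo the 2^k-th power with logarithms
    return [math.exp(math.log(abs(q[n - m] / q[n - m + 1])) / 2 ** k)
            for m in range(1, n + 1)]

# (x-1)(x-2)(x-4) = -8 + 14x - 7x^2 + x^3; magnitudes approach 4, 2, 1
print(root_magnitudes([-8.0, 14.0, -7.0, 1.0], 6))
```

Note that k cannot be taken too large in plain floating point, since the iterated coefficients overflow quickly; this is the numerical difficulty addressed by the renormalization described below.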

To also obtain the angles of these roots, a multitude of methods has been proposed, the simplest being to successively compute the square root of a (possibly complex) root of q^m(y), with m ranging from k down to 1, and to test which of the two sign variants is a root of q^{m-1}(x). Before continuing to the roots of q^{m-2}(x), it may be necessary to numerically improve the accuracy of the root approximations for q^{m-1}(x), for instance by Newton's method.

Graeffe's method works best for polynomials with simple real roots, though it can be adapted for polynomials with complex roots and coefficients, and roots with higher multiplicity. For instance, it has been observed[2] that for a root x_{\ell+1}=x_{\ell+2}=\dots=x_{\ell+d} with multiplicity d, the fractions

\left|\frac{(a^{m-1}_{\;\ell+i})^2}{a^{m}_{\;\ell+i}}\right| tend to \binom{d}{i}

for i=0,1,\dots,d. This allows one to estimate the multiplicity structure of the set of roots.

From a numerical point of view, this method is problematic since the coefficients of the iterated polynomials quickly span many orders of magnitude, which implies serious numerical errors. A second, though minor, concern is that many different polynomials lead to the same Graeffe iterates.

Tangential Graeffe method

This method replaces the numbers by truncated power series of degree 1. Symbolically, this is achieved by introducing an "algebraic infinitesimal" ε with the defining property ε² = 0. Then the polynomial p(x+\varepsilon)=p(x)+\varepsilon\,p'(x) has roots x_m-\varepsilon, with powers

(x_m-\varepsilon)^{2^k}=x_m^{2^k}-\varepsilon\,{2^k}\,x_m^{2^k-1}=y_m+\varepsilon\,\dot y_m.

Thus the value of x_m is easily obtained as the fraction x_m=-\tfrac{2^k\,y_m}{\dot y_m}.

This kind of computation with infinitesimals is easy to implement, analogous to the computation with complex numbers. If one assumes complex coordinates or an initial shift by some randomly chosen complex number, then all roots of the polynomial will be distinct and consequently recoverable with the iteration.
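An illustrative sketch of this idea in Python (not from the article): each coefficient is represented as a pair (value, ε-part) with ε² = 0, the polynomial is monic with coefficients in ascending order, and real, well-separated roots are assumed; the function names are hypothetical.

```python
# Dual-number arithmetic on pairs (a, b) representing a + b*eps, eps^2 = 0.
def dadd(a, b): return (a[0] + b[0], a[1] + b[1])
def dmul(a, b): return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])
def ddiv(a, b): return (a[0] / b[0], (a[1] * b[0] - a[0] * b[1]) / b[0] ** 2)

def graeffe_step(p):
    """Root-squaring step q(x) = (-1)^n (p_e^2 - x p_o^2) on dual coefficients."""
    n = len(p) - 1
    pe, po = p[0::2], p[1::2]
    def sq(u):
        out = [(0.0, 0.0)] * (2 * len(u) - 1)
        for i, a in enumerate(u):
            for j, b in enumerate(u):
                out[i + j] = dadd(out[i + j], dmul(a, b))
        return out
    q1, q2 = sq(pe), [(0.0, 0.0)] + sq(po)
    q1 += [(0.0, 0.0)] * (n + 1 - len(q1))
    q2 += [(0.0, 0.0)] * (n + 1 - len(q2))
    s = (-1) ** n
    return [(s * (u[0] - v[0]), s * (u[1] - v[1])) for u, v in zip(q1, q2)]

def tangential_graeffe(p, k):
    """Recover the roots, including signs, from k tangential Graeffe steps."""
    n = len(p) - 1
    # p(x + eps) = p(x) + eps p'(x): eps-part of the x^i coefficient is (i+1) p[i+1]
    q = [(p[i], (i + 1) * p[i + 1] if i < n else 0.0) for i in range(n + 1)]
    for _ in range(k):
        q = graeffe_step(q)
    roots = []
    for m in range(1, n + 1):
        y = ddiv(q[n - m], q[n - m + 1])        # ~ -(y_m + eps * ydot_m)
        roots.append(-(2 ** k) * y[0] / y[1])   # x_m = -2^k y_m / ydot_m
    return roots

print(tangential_graeffe([6.0, -5.0, 1.0], 7))  # roots of (x-2)(x-3), largest first
```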

Renormalization

Every polynomial can be scaled in domain and range such that in the resulting polynomial the first and the last coefficient have size one. If the size of the inner coefficients is bounded by M, then the size of the inner coefficients after one stage of the Graeffe iteration is bounded by nM^2. After k stages one gets the bound n^{2^k-1}M^{2^k} for the inner coefficients.

To overcome the limit posed by the growth of the powers, Malajovich–Zubelli propose to represent coefficients and intermediate results in the kth stage of the algorithm by a scaled polar form

c=\alpha\,e^{-2^k\,r}

where \alpha=\frac{c}{|c|} is a complex number of unit length and r=-2^{-k}\,\log|c| is a positive real. Splitting off the power 2^k in the exponent reduces the absolute value of c to the corresponding dyadic root. Since this preserves the magnitude of the (representation of the) initial coefficients, this process was named renormalization.

Multiplication of two numbers of this type is straightforward, whereas addition is performed following the factorization c_3=c_1+c_2=|c_1|\cdot\left(\alpha_1+\alpha_2\tfrac{|c_2|}{|c_1|}\right), where c_1 is chosen as the number of larger magnitude, that is, r_1\le r_2. Thus

\alpha_3=\tfrac{s}{|s|} and r_3=r_1-2^{-k}\,\log{|s|} with s=\alpha_1+\alpha_2\,e^{2^k(r_1-r_2)}.
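This addition rule can be sketched as follows in Python (illustrative, with hypothetical helper names; each number is a pair (α, r) representing α·e^{-2^k r}, and the sum is assumed nonzero):

```python
import math

def renorm_add(c1, c2, k):
    """Add two renormalized numbers (alpha, r), each representing alpha * exp(-2**k * r)."""
    (a1, r1), (a2, r2) = c1, c2
    if r1 > r2:                      # make c1 the number of larger magnitude
        (a1, r1), (a2, r2) = (a2, r2), (a1, r1)
    s = a1 + a2 * math.exp(2 ** k * (r1 - r2))   # s = alpha_1 + alpha_2 |c2|/|c1|
    return (s / abs(s), r1 - math.log(abs(s)) / 2 ** k)

# Round-trip check at stage k = 2: encode 0.5 and 0.25, add, decode 0.75.
k = 2
def enc(v): return (v / abs(v), -math.log(abs(v)) / 2 ** k)
def dec(c): return c[0] * math.exp(-2 ** k * c[1])
print(dec(renorm_add(enc(0.5), enc(0.25), k)))  # ≈ 0.75, up to rounding
```

Only the bounded exponent r and a unit-length α are ever stored, so the huge dynamic range of the iterated coefficients never appears explicitly.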

The coefficients a_0,a_1,\dots,a_n of the final stage k of the Graeffe iteration, for some reasonably large value of k, are represented by pairs (\alpha_m,r_m), m=0,\dots,n. By identifying the corners of the convex envelope of the point set \{(m,r_m):\;m=0,\dots,n\} one can determine the multiplicities of the roots of the polynomial. Combining this renormalization with the tangent iteration, one can extract the roots of the original polynomial directly from the coefficients at the corners of the envelope.

References

  1. ^ Alston Scott Householder: Dandelin, Lobačevskiǐ, or Graeffe?, Amer. Math. Monthly, 66 (1959), pp. 464–466 (on JSTOR)
  2. ^ G. C. Best: Notes on the Graeffe method of root squaring, Amer. Math. Monthly, 56, (1949) (on JSTOR)
