Remez algorithm

The Remez algorithm (sometimes also called the Remes algorithm, or the Remez/Remes exchange algorithm), published by Evgeny Yakovlevich Remez in 1934 [E. Ya. Remez, "Sur la détermination des polynômes d'approximation de degré donné", Comm. Soc. Math. Kharkov 10, 41 (1934); "Sur un procédé convergent d'approximations successives pour déterminer les polynômes d'approximation", Compt. Rend. Acad. Sc. 198, 2063 (1934); "Sur le calcul effectif des polynômes d'approximation de Tschebyscheff", Compt. Rend. Acad. Sc. 199, 337 (1934).], is an iterative algorithm for finding the best approximation in the uniform norm L∞ in a Chebyshev space. A typical example of a Chebyshev space is the subspace of polynomials of degree at most n in the space of real continuous functions on an interval, C[a, b].

The polynomial of best approximation of a given degree is defined to be the one that minimizes the maximum absolute difference between the polynomial and the function.
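
In symbols, writing P_n for a polynomial of degree at most n on the interval [a, b], the quantity being minimized is

:\max_{a \le x \le b} \left| f(x) - P_n(x) \right|,

and the polynomial of best approximation is the P_n attaining the smallest possible value of this maximum.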

Procedure

The Remez algorithm starts with a set of n + 2 sample points X in the approximation interval, usually the Chebyshev nodes linearly mapped to the interval, and then iterates the following steps (a code sketch is given after the list).

1. Set up a system of equations in which each equation has the form b_0 + b_1 x_i + ... + b_n x_i^n + (-1)^i E = f(x_i) for one x_i in X, and solve the system for the n + 2 unknowns (b_0, b_1, ..., b_n, E).
2. Use the coefficients b_i to form the polynomial P_n.
3. Find the set M of points of local maximum error |P_n(x) - f(x)|.
4. If the errors at the points of M are all of equal magnitude and alternate in sign, then P_n is the minimax approximation polynomial. If not, replace X with M and start over.
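
The exchange iteration above can be sketched directly in code. The following is a minimal illustration in Python, assuming NumPy is available; the names initial_points, levelled_solve and remez are ad hoc rather than any standard API, and a dense evaluation grid stands in for a true search of the local error extrema.

```python
import numpy as np

def initial_points(n, a=-1.0, b=1.0):
    # n + 2 Chebyshev nodes (extrema of T_{n+1}) linearly mapped to [a, b]
    k = np.arange(n + 2)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.sort(np.cos(np.pi * k / (n + 1)))

def levelled_solve(f, x):
    # Step 1: solve  b_0 + b_1 x_i + ... + b_n x_i^n + (-1)^i E = f(x_i)
    n = len(x) - 2
    A = np.hstack([np.vander(x, n + 1, increasing=True),
                   ((-1.0) ** np.arange(n + 2))[:, None]])
    sol = np.linalg.solve(A, f(x))
    return sol[:-1], sol[-1]            # coefficients b_0..b_n and levelled error E

def remez(f, n, a=-1.0, b=1.0, iters=30, tol=1e-12):
    x = initial_points(n, a, b)
    grid = np.linspace(a, b, 4000)      # dense grid replacing exact extremum search
    for _ in range(iters):
        b_coef, E = levelled_solve(f, x)                  # steps 1-2
        err = np.polyval(b_coef[::-1], grid) - f(grid)    # P_n - f on the grid
        # Step 3: one extremum of |err| per run of constant sign, so signs alternate
        breaks = np.where(np.diff(np.sign(err)) != 0)[0] + 1
        runs = np.split(np.arange(grid.size), breaks)
        peaks = np.array([r[np.argmax(np.abs(err[r]))] for r in runs])
        if peaks.size > n + 2:          # crude trim; a careful code preserves alternation
            peaks = peaks[np.sort(np.argsort(np.abs(err[peaks]))[-(n + 2):])]
        if peaks.size < n + 2:          # grid too coarse; a robust code would refine here
            break
        # Step 4: stop when the largest error already matches the levelled error E
        if np.max(np.abs(err)) - abs(E) < tol:
            break
        x = grid[peaks]
    return b_coef, E

# Example: degree-5 minimax approximation of exp on [-1, 1]
coeffs, E = remez(np.exp, 5)
print("coefficients:", coeffs)
print("levelled error E:", E)
```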

The result is called the polynomial of best approximation, the Chebyshev approximation, or the minimax approximation.

A review of technicalities in implementing the Remez algorithm is given by W. Fraser [W. Fraser, "A Survey of Methods of Computing Minimax and Near-Minimax Polynomial Approximations for Functions of a Single Independent Variable", J. ACM 12, 295 (1965). http://doi.acm.org/10.1145/321281.321282].

On the choice of initialization

The Chebyshev nodes are a common choice for the initial approximation because of their role in the theory of polynomial interpolation.

For the initialization of the optimization problem for a function f by the Lagrange interpolant L_n(f), it can be shown that this initial approximation is bounded by

:\lVert f - L_n(f) \rVert_\infty \le \left(1 + \lVert L_n \rVert_\infty\right) \inf_{p \in P_n} \lVert f - p \rVert_\infty

with the norm, or Lebesgue constant, of the Lagrange interpolation operator L_n at the nodes (t_1, ..., t_{n+1}) being

:\lVert L_n \rVert_\infty = \overline{\Lambda}_n(T) = \max_{-1 \le x \le 1} \lambda_n(T; x),

"T" being the zeros of the Chebyshev polynomials, and the Lebesgue functions being

:\lambda_n(T; x) = \sum_{j = 1}^{n + 1} \left| l_j(x) \right|, \quad l_j(x) = \prod_{\stackrel{i = 1}{i \ne j}}^{n + 1} \frac{x - t_i}{t_j - t_i}

Theodore A. Kilgore [T. A. Kilgore, "A characterization of the Lagrange interpolating projection with minimal Tchebycheff norm", J. Approx. Theory 24, 273 (1978). http://dx.doi.org/10.1016/0021-9045(78)90013-8], Carl de Boor, and Allan Pinkus [C. de Boor and A. Pinkus, "Proof of the conjectures of Bernstein and Erdős concerning the optimal nodes for polynomial interpolation", J. Approx. Theory 24, 289 (1978). http://dx.doi.org/10.1016/0021-9045(78)90014-X] proved that there exists a unique optimal set of nodes t_i for each L_n, although it is not known explicitly for (ordinary) polynomials. Similarly, \underline{\Lambda}_n(T) = \min_{-1 \le x \le 1} \lambda_n(T; x), and the optimality of a choice of nodes can be expressed as \overline{\Lambda}_n - \underline{\Lambda}_n \ge 0.

For Chebyshev nodes, which provide a suboptimal but analytically explicit choice, the asymptotic behavior is known to be [F. W. Luttmann and T. J. Rivlin, "Some numerical experiments in the theory of polynomial interpolation", IBM J. Res. Develop. 9, 187 (1965).]

:\overline{\Lambda}_n(T) = \frac{2}{\pi} \log(n + 1) + \frac{2}{\pi}\left(\gamma + \log\frac{8}{\pi}\right) + \alpha_{n + 1}

("γ" being the Euler-Mascheroni constant) with

:0 < \alpha_n < \frac{\pi}{72 n^2} \quad \text{for } n \ge 1,

and the upper bound [T. Rivlin, "The Lebesgue constants for polynomial interpolation", in "Proceedings of the Int. Conf. on Functional Analysis and Its Application", edited by H. G. Garnier et al. (Springer-Verlag, Berlin, 1974), p. 422; "The Chebyshev polynomials" (Wiley-Interscience, New York, 1974).]

:\overline{\Lambda}_n(T) \le \frac{2}{\pi} \log(n + 1) + 1
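
As a numerical illustration of these quantities, the sketch below (again assuming NumPy; lebesgue_function and lebesgue_constant_chebyshev are ad hoc names) evaluates the Lebesgue function on a fine grid for the Chebyshev zeros and compares the resulting \overline{\Lambda}_n(T) with the asymptotic expression and the upper bound above.

```python
import numpy as np

def lebesgue_function(nodes, x):
    # lambda_n(T; x) = sum_j |l_j(x)|, with l_j the Lagrange basis polynomials
    lam = np.zeros_like(x)
    for j, tj in enumerate(nodes):
        others = np.delete(nodes, j)
        lam += np.abs(np.prod((x[:, None] - others) / (tj - others), axis=1))
    return lam

def lebesgue_constant_chebyshev(n, grid_size=20001):
    # nodes t_k = cos((2k - 1) pi / (2(n + 1))), the zeros of T_{n+1}
    k = np.arange(1, n + 2)
    nodes = np.cos((2 * k - 1) * np.pi / (2 * (n + 1)))
    x = np.linspace(-1.0, 1.0, grid_size)
    return np.max(lebesgue_function(nodes, x))

n = 10
gamma = 0.5772156649015329                     # Euler-Mascheroni constant
numeric = lebesgue_constant_chebyshev(n)
asymptotic = (2 / np.pi) * (np.log(n + 1) + gamma + np.log(8 / np.pi))
upper = (2 / np.pi) * np.log(n + 1) + 1
print(numeric, asymptotic, upper)              # numeric ~ asymptotic + alpha_{n+1} < upper
```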

Lev Brutman [L. Brutman, "On the Lebesgue Function for Polynomial Interpolation", SIAM J. Numer. Anal. 15, 694 (1978). http://dx.doi.org/10.1137/0715046] obtained the following bound for n \ge 3, with \hat{T} being the zeros of the expanded Chebyshev polynomials:

:\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) < \overline{\Lambda}_3 - \frac{1}{6} \cot \frac{\pi}{8} + \frac{\pi}{64} \frac{1}{\sin^2(3\pi/16)} - \frac{2}{\pi}(\gamma - \log \pi) \approx 0.201.

Rüdiger Günttner [R. Günttner, "Evaluation of Lebesgue Constants", SIAM J. Numer. Anal. 17, 512 (1980). http://dx.doi.org/10.1137/0717043] obtained, from a sharper estimate, for n \ge 40

:\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) < 0.0196.

Variants

Sometimes more than one sample point is replaced at the same time with the locations of nearby maximum absolute differences.

Sometimes relative error is used to measure the difference between the approximation and the function, especially if the approximation will be used to compute the function on a computer which uses floating-point arithmetic.
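
For instance, assuming f does not vanish on the interval, minimizing the relative error (P_n(x) - f(x)) / f(x) amounts to replacing the defining equations of step 1 above with

:b_0 + b_1 x_i + \cdots + b_n x_i^n + (-1)^i E\, f(x_i) = f(x_i),

which is still linear in the unknowns, with E now playing the role of the levelled relative error.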

See also

* Approximation theory

External links

* [http://www.bores.com/courses/intro/filters/4_equi.htm Intro to DSP]
* [http://mathworld.wolfram.com/RemezAlgorithm.html Remez Algorithm from MathWorld]

