Generating trigonometric tables
In mathematics, tables of trigonometric functions are useful in a number of areas. Before the existence of pocket calculators, trigonometric tables were essential for navigation, science and engineering. The calculation of mathematical tables was an important area of study, which led to the development of the first mechanical computing devices. Modern computers and pocket calculators now generate trigonometric function values on demand, using special libraries of mathematical code. Often, these libraries use pre-calculated tables internally, and compute the required value by using an appropriate interpolation method.
Interpolation of simple look-up tables of trigonometric functions is still used in computer graphics, where accurate calculations are either not needed, or cannot be made fast enough.
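As a rough illustration of the look-up-table-with-interpolation approach described above, the following Python sketch precomputes a coarse sine table once and then answers queries by linear interpolation between the two nearest entries. The table size and the function name are arbitrary illustrative choices, not taken from any particular graphics or math library.

import math

TABLE_SIZE = 256                                   # number of samples per full turn (arbitrary)
STEP = 2.0 * math.pi / TABLE_SIZE                  # angular spacing of the table entries
SIN_TABLE = [math.sin(n * STEP) for n in range(TABLE_SIZE + 1)]   # extra entry simplifies wrap-around

def sin_lookup(x):
    """Approximate sin(x) by linear interpolation in the precomputed table."""
    x = x % (2.0 * math.pi)                        # reduce the angle to [0, 2*pi)
    pos = x / STEP                                 # fractional table index
    i = int(pos)                                   # index of the lower neighbouring entry
    frac = pos - i                                 # interpolation weight in [0, 1)
    return SIN_TABLE[i] + frac * (SIN_TABLE[i + 1] - SIN_TABLE[i])

print(abs(sin_lookup(1.2345) - math.sin(1.2345)))  # error well below 1e-4 for a 256-entry table

Accuracy can be traded against memory by changing the table size, or improved without enlarging the table by using a higher-order interpolation scheme.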
Another important application of trigonometric tables and generation schemes is for fast Fourier transform (FFT) algorithms, where the same trigonometric function values (called "twiddle factors") must be evaluated many times in a given transform, especially in the common case where many transforms of the same size are computed. In this case, calling generic library routines every time is unacceptably slow. One option is to call the library routines once, to build up a table of those trigonometric values that will be needed, but this requires significant memory to store the table. The other possibility, since a regular sequence of values is required, is to use a recurrence formula to compute the trigonometric values on the fly. Significant research has been devoted to finding accurate, stable recurrence schemes in order to preserve the accuracy of the FFT (which is very sensitive to trigonometric errors).
On-demand computation

Modern computers and calculators use a variety of techniques to provide trigonometric function values on demand for arbitrary angles (Kantabutra, 1996). One common method, especially on higher-end processors with floating-point units, is to combine a polynomial or rational approximation (such as Chebyshev approximation, best uniform approximation, and Padé approximation, and typically for higher or variable precisions, Taylor and Laurent series) with range reduction and a table lookup: the routine first looks up the closest angle in a small table, and then uses the polynomial to compute the correction. Maintaining precision while performing such interpolation is nontrivial, however, and methods such as Gal's accurate tables, Cody and Waite reduction, and Payne and Hanek reduction can be used for this purpose. On simpler devices that lack a hardware multiplier, there is an algorithm called CORDIC (as well as related techniques) that is more efficient, since it uses only shifts and additions. All of these methods are commonly implemented in hardware for performance reasons.
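The following Python sketch shows the table-lookup-plus-polynomial-correction idea in simplified form. It uses a naive fmod-based range reduction (not the careful Cody–Waite or Payne–Hanek reductions mentioned above) and plain truncated Taylor polynomials rather than a tuned minimax fit; the table size and names are illustrative assumptions, not the method of any particular library.

import math

N = 256                                            # size of the small angle table (arbitrary)
STEP = 2.0 * math.pi / N
COS_TAB = [math.cos(k * STEP) for k in range(N)]
SIN_TAB = [math.sin(k * STEP) for k in range(N)]

def sin_table_poly(x):
    """sin(x) via the nearest table entry plus a low-degree polynomial correction."""
    x = math.fmod(x, 2.0 * math.pi)                # naive range reduction (not Payne-Hanek)
    if x < 0.0:
        x += 2.0 * math.pi
    k = int(round(x / STEP))                       # nearest tabulated angle is k*STEP
    d = x - k * STEP                               # small residual angle, |d| <= STEP/2
    k %= N                                         # wrap the table index
    # Short Taylor polynomials suffice because |d| <= pi/256.
    sin_d = d - d**3 / 6.0 + d**5 / 120.0
    cos_d = 1.0 - d*d / 2.0 + d**4 / 24.0 - d**6 / 720.0
    # The angle-addition formula combines the table entry with the correction.
    return SIN_TAB[k] * cos_d + COS_TAB[k] * sin_d

print(abs(sin_table_poly(12.3456) - math.sin(12.3456)))   # close to machine precision

A production library would use a minimax polynomial, a more careful reduction of large arguments, and techniques such as Gal's accurate tables to reduce the rounding error contributed by the stored table values.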
For very high precision calculations, when series-expansion convergence becomes too slow, trigonometric functions can be approximated by the arithmetic-geometric mean, which itself approximates the trigonometric function by the (complex) elliptic integral (Brent, 1976).

Trigonometric functions of angles that are rational multiples of 2π are algebraic numbers, related to roots of unity, and can be computed with a polynomial root-finding algorithm in the complex plane. For example, the cosine and sine of 2π⋅5/37 are the real and imaginary parts, respectively, of a 37th root of unity, corresponding to a root of the degree-37 polynomial x^{37} − 1. Root-finding algorithms such as Newton's method are much simpler than the arithmetic-geometric mean algorithms above while converging at a similar asymptotic rate; the latter algorithms are required for transcendental trigonometric constants, however.
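As an illustration of the root-of-unity remark, the Python sketch below polishes a deliberately crude starting estimate into the cosine and sine of 2π⋅5/37 by applying Newton's method to z^37 − 1 in the complex plane. The library sine and cosine are used only to check the result at the end, and the iteration count is a conservative choice.

import math

N, k = 37, 5
theta = 2.0 * math.pi * k / N          # cos(theta), sin(theta) are parts of a 37th root of unity

# Crude starting point from a few Taylor terms; Newton's method roughly
# doubles the number of correct digits at every step.
z = complex(1.0 - theta**2 / 2.0 + theta**4 / 24.0,
            theta - theta**3 / 6.0 + theta**5 / 120.0)

for _ in range(6):
    z = z - (z**N - 1.0) / (N * z**(N - 1))        # Newton step for f(z) = z**N - 1

print(z.real, z.imag)                              # approximate cos(theta) and sin(theta)
print(abs(z.real - math.cos(theta)),               # check: both errors are at
      abs(z.imag - math.sin(theta)))               # machine-precision level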
Half-angle and angle-addition formulas

Historically, the earliest method by which trigonometric tables were computed, and probably the most common until the advent of computers, was to repeatedly apply the half-angle and angle-addition trigonometric identities starting from a known value (such as sin(π/2) = 1, cos(π/2) = 0). The relevant identities, the first recorded derivation of which is by Ptolemy, are (with signs determined by the quadrant of x):

:\cos\left(\frac{x}{2}\right) = \pm\sqrt{\frac{1 + \cos(x)}{2}}
:\sin\left(\frac{x}{2}\right) = \pm\sqrt{\frac{1 - \cos(x)}{2}}
:\sin(x \pm y) = \sin(x)\cos(y) \pm \cos(x)\sin(y)
:\cos(x \pm y) = \cos(x)\cos(y) \mp \sin(x)\sin(y)
Various other permutations on these identities are possible (for example, the earliest trigonometric tables used not sine and cosine, but sine and versine).
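A minimal Python sketch of this historical procedure, under an arbitrary choice of table resolution (the library square root stands in for the by-hand root extraction the half-angle formulas require): repeated half-angle steps starting from cos(π/2) = 0 and sin(π/2) = 1 produce the sine and cosine of a very small angle, and the angle-addition formulas then fill in a quarter-circle table at multiples of that angle.

import math

LEVELS = 10                           # the smallest angle will be pi / 2**LEVELS (arbitrary)

# Half-angle formulas (first quadrant, so the '+' sign applies), starting from
# cos(pi/2) = 0 and sin(pi/2) = 1.
c, s = 0.0, 1.0
for _ in range(LEVELS - 1):
    c, s = math.sqrt((1.0 + c) / 2.0), math.sqrt((1.0 - c) / 2.0)

d_cos, d_sin = c, s                   # cos(d) and sin(d) for the step d = pi / 2**LEVELS

# Angle-addition formulas build the table entry by entry up to pi/2.
cos_table, sin_table = [1.0], [0.0]   # cos(0), sin(0)
for _ in range(2**(LEVELS - 1)):
    c_prev, s_prev = cos_table[-1], sin_table[-1]
    cos_table.append(c_prev * d_cos - s_prev * d_sin)
    sin_table.append(s_prev * d_cos + c_prev * d_sin)

n = 123                               # spot-check one entry against the library function
print(abs(sin_table[n] - math.sin(n * math.pi / 2**LEVELS)))

The sections below examine how rounding errors accumulate in exactly this kind of repeated-addition scheme.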
A quick, but inaccurate, approximation

A quick, but inaccurate, algorithm for calculating a table of N approximations s_n for sin(2πn/N) and c_n for cos(2πn/N) is:

:s_0 = 0
:c_0 = 1
:s_{n+1} = s_n + d × c_n
:c_{n+1} = c_n − d × s_n

for n = 0, ..., N − 1, where d = 2π/N.
This is simply the Euler method for integrating the differential equations

:ds/dt = c
:dc/dt = −s

with initial conditions s(0) = 0 and c(0) = 1, whose analytical solution is s = sin(t) and c = cos(t).
Unfortunately, this is not a useful algorithm for generating sine tables because it has a significant error, proportional to 1/N. For example, for N = 256 the maximum error in the sine values is ~0.061 (s_202 = −1.0368 instead of −0.9757). For N = 1024, the maximum error in the sine values is ~0.015 (s_803 = −0.99321 instead of −0.97832), about 4 times smaller. If the sine and cosine values obtained were to be plotted, this algorithm would draw a logarithmic spiral rather than a circle.
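A direct Python transcription of this recurrence, which reproduces the error figures quoted above (the comparison against the library sine is only for checking, and would of course not be available to an implementation that needed such a table):

import math

def euler_sine_table(N):
    """Build N approximate sine values with one Euler step of size d = 2*pi/N per entry."""
    d = 2.0 * math.pi / N
    s, c = 0.0, 1.0
    values = []
    for _ in range(N):
        values.append(s)
        s, c = s + d * c, c - d * s      # both updates use the previous s and c
    return values

for N in (256, 1024):
    table = euler_sine_table(N)
    worst = max(abs(table[n] - math.sin(2.0 * math.pi * n / N)) for n in range(N))
    print(N, worst)                      # roughly 0.061 and 0.015, as quoted above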
A better, but still imperfect, recurrence formula
A simple recurrence formula to generate trigonometric tables is based on Euler's formula and the relation:

:e^{i(\theta + \Delta\theta)} = e^{i\theta} \times e^{i\Delta\theta}
This leads to the following recurrence to compute trigonometric values s_n and c_n as above:

:c_0 = 1
:s_0 = 0
:c_{n+1} = w_r c_n − w_i s_n
:s_{n+1} = w_i c_n + w_r s_n

for n = 0, ..., N − 1, where w_r = cos(2π/N) and w_i = sin(2π/N). These two starting trigonometric values are usually computed using existing library functions (but could also be found, e.g., by employing Newton's method in the complex plane to solve for the primitive root of z^N − 1).

This method would produce an "exact" table in exact arithmetic, but has errors in finite-precision floating-point arithmetic. In fact, the errors grow as O(εN) (in both the worst and average cases), where ε is the floating-point precision.
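A direct Python transcription of this recurrence, with the two starting values taken from the library functions as described above; the error check at the end is only for illustration:

import math

def rotation_table(N):
    """Generate cos and sin of 2*pi*n/N by repeated application of the recurrence."""
    w_r = math.cos(2.0 * math.pi / N)
    w_i = math.sin(2.0 * math.pi / N)
    c, s = 1.0, 0.0
    cosines, sines = [], []
    for _ in range(N):
        cosines.append(c)
        sines.append(s)
        c, s = w_r * c - w_i * s, w_i * c + w_r * s   # one rotation by 2*pi/N per step
    return cosines, sines

N = 2**16
_, sines = rotation_table(N)
worst = max(abs(sines[n] - math.sin(2.0 * math.pi * n / N)) for n in range(N))
print(worst)      # the error grows roughly in proportion to N, as described above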
:"c"0 = 1:"s"0 = 0:"c""n"+1 = "c""n" − (α"c""n" + β "s""n"):"s""n"+1 = "s""n" + (β "c""n" − α "s""n")
where α = 2 sin²(π/"N") and β = sin(2π/"N"). The errors of this method are much smaller, O(ε √"N") on average and O(ε "N") in the worst case, but this is still large enough to substantially degrade the accuracy of FFTs of large sizes.
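For comparison, a Python transcription of the modified recurrence; on the same table size as the previous sketch its measured worst-case error is noticeably smaller, consistent with the O(ε√N) average behaviour quoted above:

import math

def singleton_table(N):
    """Generate cos and sin of 2*pi*n/N using the modified (Singleton) recurrence."""
    alpha = 2.0 * math.sin(math.pi / N) ** 2           # alpha = 2*sin^2(pi/N) = 1 - cos(2*pi/N)
    beta = math.sin(2.0 * math.pi / N)
    c, s = 1.0, 0.0
    cosines, sines = [], []
    for _ in range(N):
        cosines.append(c)
        sines.append(s)
        c, s = c - (alpha * c + beta * s), s + (beta * c - alpha * s)
    return cosines, sines

N = 2**16
_, sines = singleton_table(N)
worst = max(abs(sines[n] - math.sin(2.0 * math.pi * n / N)) for n in range(N))
print(worst)      # typically much smaller than the plain rotation recurrence for the same N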
See also

* Numerical analysis
* CORDIC
* Exact trigonometric constants

References
* Carl B. Boyer, A History of Mathematics, 2nd ed. (Wiley, New York, 1991).
* Manfred Tasche and Hansmartin Zeuner, "Improved roundoff error analysis for precomputed twiddle factors," J. Computational Analysis and Applications 4 (1), 1–18 (2002).
* James C. Schatzman, "Accuracy of the discrete Fourier transform and the fast Fourier transform," SIAM J. Sci. Comput. 17 (5), 1150–1166 (1996).
* Vitit Kantabutra, "On hardware for computing exponential and trigonometric functions," IEEE Trans. Computers 45 (3), 328–339 (1996).
* R. P. Brent, "[http://doi.acm.org/10.1145/321941.321944 Fast Multiple-Precision Evaluation of Elementary Functions]", J. ACM 23, 242–251 (1976).