 Computational complexity of mathematical operations

The following tables list the running time of various algorithms for common mathematical operations.
Here, complexity refers to the time complexity of performing computations on a multitape Turing machine.^{[1]} See big O notation for an explanation of the notation used.
Note: Due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm.
Arithmetic functions
| Operation | Input | Output | Algorithm | Complexity |
|---|---|---|---|---|
| Addition | Two n-digit numbers | One (n+1)-digit number | Schoolbook addition with carry | Θ(n) |
| Subtraction | Two n-digit numbers | One (n+1)-digit number | Schoolbook subtraction with borrow | Θ(n) |
| Multiplication | Two n-digit numbers | One 2n-digit number | Schoolbook long multiplication | O(n^{2}) |
| | | | Karatsuba algorithm | O(n^{1.585}) |
| | | | 3-way Toom–Cook multiplication | O(n^{1.465}) |
| | | | k-way Toom–Cook multiplication | O(n^{log(2k − 1)/log k}) |
| | | | Mixed-level Toom–Cook (Knuth 4.3.3-T)^{[2]} | O(n 2^{√(2 log n)} log n) |
| | | | Schönhage–Strassen algorithm | O(n log n log log n) |
| | | | Fürer's algorithm^{[3]} | O(n log n 2^{log* n}) |
| Division | Two n-digit numbers | One n-digit number | Schoolbook long division | O(n^{2}) |
| | | | Newton's method | M(n) |
| Square root | One n-digit number | One n-digit number | Newton's method | M(n) |
| Modular exponentiation | Two n-digit numbers and a k-bit exponent | One n-digit number | Repeated multiplication and reduction | O(2^{k} M(n)) |
| | | | Exponentiation by squaring | O(k M(n)) |
| | | | Exponentiation with Montgomery reduction | O(k M(n)) |

Schnorr and Stumpf^{[4]} conjectured that no fastest algorithm for multiplication exists.
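The jump from the schoolbook O(n^{2}) bound to Karatsuba's O(n^{1.585}) comes from trading four half-size multiplications for three. A minimal Python sketch of the recursion, splitting operands at a bit boundary (the base case falls back to the built-in multiply):

```python
def karatsuba(x, y):
    """Multiply non-negative integers with three recursive half-size
    products instead of four, giving O(n^1.585) digit operations."""
    if x < 16 or y < 16:          # small operands: multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x1, x0 = x >> half, x & mask  # x = x1 * 2^half + x0
    y1, y0 = y >> half, y & mask  # y = y1 * 2^half + y0
    z2 = karatsuba(x1, y1)        # high product
    z0 = karatsuba(x0, y0)        # low product
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0  # middle product, one multiply
    return (z2 << (2 * half)) + (z1 << half) + z0
```

The same idea generalizes to the Toom–Cook family in the table above: splitting into k parts yields 2k − 1 recursive products instead of k^{2}.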
Algebraic functions
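Horner's method, listed in the table below, attains the Θ(n) bound with exactly n multiplications and n additions. A short Python sketch:

```python
def horner(coeffs, x):
    """Evaluate a degree-n polynomial with n multiplies and n adds (Θ(n)).

    coeffs lists coefficients from the highest-degree term down,
    e.g. [2, -3, 1] represents 2x^2 - 3x + 1.
    """
    result = 0
    for c in coeffs:
        result = result * x + c
    return result
```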
| Operation | Input | Output | Algorithm | Complexity |
|---|---|---|---|---|
| Polynomial evaluation | One polynomial of degree n with fixed-size coefficients | One fixed-size number | Direct evaluation | Θ(n) |
| | | | Horner's method | Θ(n) |
| Polynomial gcd (over Z[x] or F[x]) | Two polynomials of degree n with fixed-size coefficients | One polynomial of degree at most n | Euclidean algorithm | O(n^{2}) |
| | | | Fast Euclidean algorithm^{[5]} | O(n (log n)^{2} log log n) |

Special functions
Many of the methods in this section are given in Borwein & Borwein.^{[6]}
Elementary functions
The elementary functions are constructed by composing arithmetic operations, the exponential function (exp), the natural logarithm (log), trigonometric functions (sin, cos), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either exp or log can be computed with some complexity, then that complexity is attainable for all other elementary functions.
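For instance, log can be recovered from exp this way: Newton's iteration for the root of f(y) = exp(y) − a is y ← y − 1 + a·exp(−y), and each step roughly doubles the number of correct digits near the root. A fixed-precision sketch (the starting guess and iteration count are illustrative choices, adequate for moderate a):

```python
import math

def log_newton(a, iterations=10):
    """Compute log(a) by Newton inversion of exp: solve exp(y) = a.

    Each step y <- y - 1 + a * exp(-y) roughly doubles the number of
    correct digits once the guess is close to the root.
    """
    y = 0.0  # crude starting guess; fine for moderate a
    for _ in range(iterations):
        y = y - 1.0 + a * math.exp(-y)
    return y
```

Carried out at precision n with a fast exp, this is what makes the complexities of exp and log interchangeable, as stated above.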
Below, the size n refers to the number of digits of precision at which the function is to be evaluated.
| Algorithm | Applicability | Complexity |
|---|---|---|
| Taylor series; repeated argument reduction (e.g. exp(2x) = [exp(x)]^{2}) and direct summation | exp, log, sin, cos | O(n^{1/2} M(n)) |
| Taylor series; FFT-based acceleration | exp, log, sin, cos | O(n^{1/3} (log n)^{2} M(n)) |
| Taylor series; binary splitting + bit-burst method^{[7]} | exp, log, sin, cos | O((log n)^{2} M(n)) |
| Arithmetic–geometric mean iteration | log | O(log n M(n)) |

It is not known whether O(log n M(n)) is the optimal complexity for elementary functions. The best known lower bound is the trivial bound Ω(M(n)).
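The first strategy in the table can be made concrete at fixed precision: halve the argument until it is tiny, sum a short Taylor series, then square the result back up. A sketch with Python's decimal module (the reduction threshold and guard-digit counts here are ad-hoc choices):

```python
from decimal import Decimal, getcontext

def exp_reduced(x, digits=30):
    """exp(x) via repeated argument halving plus direct Taylor summation."""
    getcontext().prec = digits + 10          # working precision with guard digits
    x = Decimal(str(x))
    halvings = 0
    while abs(x) > Decimal("0.001"):         # reduce until the series is short
        x /= 2
        halvings += 1
    term = total = Decimal(1)
    n = 0
    while abs(term) > Decimal(10) ** -(digits + 5):
        n += 1
        term = term * x / n                  # next Taylor term x^n / n!
        total += term
    for _ in range(halvings):                # undo reduction: exp(x) = exp(x/2)^2
        total *= total
    return +total                            # round to working precision
```

With O(√n) halvings the series needs only O(√n) terms, which is the source of the O(n^{1/2} M(n)) row above.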
Non-elementary functions
| Function | Input | Algorithm | Complexity |
|---|---|---|---|
| Gamma function | n-digit number | Series approximation of the incomplete gamma function | O(n^{1/2} (log n)^{2} M(n)) |
| | Fixed rational number | Hypergeometric series | O((log n)^{2} M(n)) |
| | m/24, m an integer | Arithmetic–geometric mean iteration | O(log n M(n)) |
| Hypergeometric function _{p}F_{q} | n-digit number | (As described in Borwein & Borwein) | O(n^{1/2} (log n)^{2} M(n)) |
| | Fixed rational number | Hypergeometric series | O((log n)^{2} M(n)) |

Mathematical constants
This table gives the complexity of computing approximations to the given constants to n correct digits.
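As an example from the table below, the Salamin–Brent (Gauss–Legendre) iteration roughly doubles the number of correct digits of π per pass, so O(log n) passes at precision n suffice. A decimal-module sketch (the guard-digit and pass counts are ad-hoc choices):

```python
from decimal import Decimal, getcontext

def pi_agm(digits=50):
    """Approximate pi with the Salamin-Brent AGM iteration;
    correct digits roughly double on every pass."""
    getcontext().prec = digits + 10
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal("0.25")
    p = Decimal(1)
    for _ in range(digits.bit_length() + 2):  # ~log2(digits) passes
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)
```

Each pass costs O(M(n)) (a multiplication and a square root), giving the O(log n M(n)) entry in the table.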
| Constant | Algorithm | Complexity |
|---|---|---|
| Golden ratio, φ | Newton's method | O(M(n)) |
| Square root of 2, √2 | Newton's method | O(M(n)) |
| Euler's number, e | Binary splitting of the Taylor series for the exponential function | O(log n M(n)) |
| | Newton inversion of the natural logarithm | O(log n M(n)) |
| Pi, π | Binary splitting of the arctan series in Machin's formula | O((log n)^{2} M(n)) |
| | Salamin–Brent algorithm | O(log n M(n)) |
| Euler's constant, γ | Sweeney's method (approximation in terms of the exponential integral) | O((log n)^{2} M(n)) |

Number theory
Algorithms for number theoretical calculations are studied in computational number theory.
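The binary GCD algorithm in the table below matches Euclid's O(n^{2}) bound using only shifts, comparisons, and subtractions, which makes it attractive on binary hardware. A Python sketch of Stein's formulation:

```python
def binary_gcd(a, b):
    """Stein's binary GCD: only shifts and subtractions, no division."""
    if a == 0:
        return b
    if b == 0:
        return a
    common = ((a | b) & -(a | b)).bit_length() - 1  # shared factors of 2
    a >>= (a & -a).bit_length() - 1                 # make a odd
    while b:
        b >>= (b & -b).bit_length() - 1             # make b odd
        if a > b:
            a, b = b, a                             # keep a <= b
        b -= a                                      # b - a is even; repeat
    return a << common
```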
| Operation | Input | Output | Algorithm | Complexity |
|---|---|---|---|---|
| Greatest common divisor | Two n-digit numbers | One number with at most n digits | Euclidean algorithm | O(n^{2}) |
| | | | Binary GCD algorithm | O(n^{2}) |
| | | | Left/right k-ary binary GCD algorithm^{[8]} | O(n^{2} / log n) |
| | | | Stehlé–Zimmermann algorithm^{[9]} | O(log n M(n)) |
| | | | Schönhage controlled Euclidean descent algorithm^{[10]} | O(log n M(n)) |
| Jacobi symbol | Two n-digit numbers | −1, 0, or 1 | Schönhage controlled Euclidean descent algorithm^{[11]} | O(log n M(n)) |
| | | | Stehlé–Zimmermann algorithm^{[12]} | O(log n M(n)) |
| Factorial | A fixed-size number m | One O(m log m)-digit number | Bottom-up multiplication | O(m^{2} log m) |
| | | | Binary splitting | O(log m M(m log m)) |
| | | | Exponentiation of the prime factors of m | O(log log m M(m log m)),^{[13]} O(M(m log m))^{[1]} |

Matrix algebra
The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic.
| Operation | Input | Output | Algorithm | Complexity |
|---|---|---|---|---|
| Matrix multiplication | Two n×n matrices | One n×n matrix | Schoolbook matrix multiplication | O(n^{3}) |
| | | | Strassen algorithm | O(n^{2.807}) |
| | | | Coppersmith–Winograd algorithm | O(n^{2.376}) |
| Matrix multiplication | One n×m matrix and one m×p matrix | One n×p matrix | Schoolbook matrix multiplication | O(nmp) |
| Matrix inversion | One n×n matrix | One n×n matrix | Gauss–Jordan elimination | O(n^{3}) |
| | | | Strassen algorithm | O(n^{2.807}) |
| | | | Coppersmith–Winograd algorithm | O(n^{2.376}) |
| Determinant | One n×n matrix | One number with at most O(n log n) bits | Laplace expansion | O(n!) |
| | | | LU decomposition | O(n^{3}) |
| | | | Bareiss algorithm | O(n^{3}) |
| | | | Fast matrix multiplication | O(n^{2.376}) |
| Back substitution | Triangular matrix | n solutions | Back substitution | O(n^{2})^{[14]} |

In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy and Christopher Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2.^{[15]} It has also been conjectured that no fastest algorithm for matrix multiplication exists, in light of the nearly 20 successive improvements leading to the Coppersmith–Winograd algorithm.
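Strassen's O(n^{2.807}) bound comes from multiplying 2×2 block matrices with seven products instead of eight. A pure-Python sketch for n×n lists with n a power of two (a real implementation would switch to schoolbook multiplication below some cutoff size instead of recursing to 1×1):

```python
def strassen(A, B):
    """Strassen's seven-multiplication recursion for n x n matrices,
    n a power of two; O(n^2.807) element multiplications."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split M into four h x h quadrants
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    # the seven Strassen products
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    # stitch the quadrants back together
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```

Seven recursive calls on half-size blocks give T(n) = 7 T(n/2) + O(n^{2}), i.e. O(n^{log2 7}) ≈ O(n^{2.807}).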
References
 ^ ^{a} ^{b} A. Schönhage, A.F.W. Grotefeld, E. Vetter: Fast Algorithms—A Multitape Turing Machine Implementation, BI Wissenschafts-Verlag, Mannheim, 1994.
 ^ D. Knuth. The Art of Computer Programming, Volume 2. Third Edition, Addison-Wesley 1997.
 ^ Martin Fürer. Faster Integer Multiplication. Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California, USA, June 11–13, 2007, pp. 55–67.
 ^ C. P. Schnorr and G. Stumpf. A characterization of complexity sequences. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 21(1):47–56, 1975.
 ^ http://planetmath.org/encyclopedia/HalfGCDAlgorithm.html
 ^ J. Borwein & P. Borwein. Pi and the AGM: A Study in Analytic Number Theory and Computational Complexity. John Wiley 1987.
 ^ David and Gregory Chudnovsky. Approximations and complex multiplication according to Ramanujan. Ramanujan revisited, Academic Press, 1988, pp 375–472.
 ^ J. Sorenson (1994). "Two Fast GCD Algorithms". Journal of Algorithms 16 (1): 110–144. doi:10.1006/jagm.1994.1006.
 ^ R. Crandall & C. Pomerance. Prime Numbers: A Computational Perspective. Second Edition, Springer 2005.
 ^ Möller N (2008). "On Schönhage's algorithm and subquadratic integer gcd computation". Mathematics of Computation 77 (261): 589–607. doi:10.1090/S0025-5718-07-02017-0. http://www.lysator.liu.se/~nisse/archive/sgcd.pdf.
 ^ Bernstein D J. "Faster Algorithms to Find Non-squares Modulo Worst-case Integers". http://cr.yp.to/papers/nonsquare.ps.
 ^ Richard P. Brent; Paul Zimmermann (2010). "An O(M(n) log n) algorithm for the Jacobi symbol". arXiv:1004.2091.
 ^ P. Borwein. "On the complexity of calculating factorials". Journal of Algorithms 6, 376–380 (1985).
 ^ J. B. Fraleigh and R. A. Beauregard, "Linear Algebra," Addison-Wesley Publishing Company, 1987, p 95.
 ^ Henry Cohn, Robert Kleinberg, Balazs Szegedy, and Chris Umans. Grouptheoretic Algorithms for Matrix Multiplication. arXiv:math.GR/0511460. Proceedings of the 46th Annual Symposium on Foundations of Computer Science, 23–25 October 2005, Pittsburgh, PA, IEEE Computer Society, pp. 379–388.