 Big O notation

In mathematics, big O notation is used to describe the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. It is a member of a larger family of notations that is called Landau notation, Bachmann–Landau notation, or asymptotic notation. In computer science, big O notation is used to classify algorithms by how they respond (e.g., in their processing time or working space requirements) to changes in input size.
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
Big O notation is also used in many other fields to provide similar estimates.
Formal definition
Let f(x) and g(x) be two functions defined on some subset of the real numbers. One writes

    f(x) = O(g(x)) as x → ∞

if and only if there is a positive constant M such that for all sufficiently large values of x, f(x) is at most M multiplied by g(x) in absolute value. That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x_{0} such that

    |f(x)| ≤ M |g(x)| for all x > x_{0}.
In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that f(x) = O(g(x)). The notation can also be used to describe the behavior of f near some real number a (often, a = 0): we say

    f(x) = O(g(x)) as x → a

if and only if there exist positive numbers δ and M such that

    |f(x)| ≤ M |g(x)| for all x with |x − a| < δ.
If g(x) is nonzero for values of x sufficiently close to a, both of these definitions can be unified using the limit superior:

    f(x) = O(g(x)) as x → a

if and only if

    limsup_{x→a} |f(x)/g(x)| < ∞.
Example
In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f(x) is derived by the following simplification rules:
 If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
 If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted.
For example, let f(x) = 6x^{4} − 2x^{3} + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x^{4}, −2x^{3}, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x^{4}. Now one may apply the second rule: 6x^{4} is a product of 6 and x^{4} in which the first factor does not depend on x. Omitting this factor results in the simplified form x^{4}. Thus, we say that f(x) is a "big O" of x^{4}, or mathematically, f(x) = O(x^{4}). One may confirm this calculation using the formal definition: let f(x) = 6x^{4} − 2x^{3} + 5 and g(x) = x^{4}. Applying the formal definition from above, the statement that f(x) = O(x^{4}) is equivalent to its expansion,

    |f(x)| ≤ M x^{4}

for some suitable choice of x_{0} and M and for all x > x_{0}. To prove this, let x_{0} = 1 and M = 13. Then, for all x > x_{0}:

    |6x^{4} − 2x^{3} + 5| ≤ 6x^{4} + |2x^{3}| + 5 ≤ 6x^{4} + 2x^{4} + 5x^{4} = 13x^{4}

so

    |6x^{4} − 2x^{3} + 5| ≤ 13x^{4}.
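This bound can be spot-checked numerically with the witnesses M = 13 and x_{0} = 1 from the proof; a minimal Python sketch that samples a few points, not a proof:

    # Spot-check of |6x^4 - 2x^3 + 5| <= 13*x^4 for x > 1, using the
    # witnesses M = 13 and x0 = 1 derived above.
    def f(x):
        return 6 * x**4 - 2 * x**3 + 5

    M, x0 = 13, 1
    for x in [1.001, 2, 10, 1000, 1e6]:
        assert abs(f(x)) <= M * x**4, f"bound fails at x = {x}"
    print("|f(x)| <= 13*x^4 held at every sampled x > 1")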
Usage
Big O notation has two main areas of application. In mathematics, it is commonly used to describe how closely a finite series approximates a given function, especially in the case of a truncated Taylor series or asymptotic expansion. In computer science, it is useful in the analysis of algorithms. In both applications, the function g(x) appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms. There are two formally close, but noticeably different, usages of this notation: infinite asymptotics and infinitesimal asymptotics. This distinction is only in application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument.
Infinite asymptotics
Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n^{2} − 2n + 2. As n grows large, the n^{2} term will come to dominate, so that all other terms can be neglected — for instance when n = 500, the term 4n^{2} is 1000 times as large as the 2n term. Ignoring the latter would have negligible effect on the expression's value for most purposes. Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term n^{3} or n^{4}. Even if T(n) = 1,000,000n^{2}, if U(n) = n^{3}, the latter will always exceed the former once n grows larger than 1,000,000 (T(1,000,000) = 1,000,000^{3} = U(1,000,000)). Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm. So the big O notation captures what remains: we write either

    T(n) = O(n^{2})

or

    T(n) ∈ O(n^{2})

and say that the algorithm has order of n^{2} time complexity. Note that "=" is not meant to express "is equal to" in its normal mathematical sense, but rather a more colloquial "is", so the second expression is technically accurate (see the "Equals sign" discussion below) while the first is a common abuse of notation.^{[1]}
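A short numerical illustration of this dominance, using the T(n) and U(n) above (a sketch, not a proof):

    def T(n):
        return 4 * n**2 - 2 * n + 2

    # The ratio T(n)/n^2 settles toward the leading coefficient 4.
    for n in [500, 10_000, 1_000_000]:
        print(n, T(n) / n**2)

    # Even a huge constant cannot save a lower order: 1,000,000*n^2
    # is exceeded by n^3 for every n above 1,000,000.
    n = 1_000_001
    print(1_000_000 * n**2 < n**3)   # True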
Infinitesimal asymptotics
Big O can also be used to describe the error term in an approximation to a mathematical function. The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term. For example,

    e^{x} = 1 + x + x^{2}/2 + O(x^{3}) as x → 0

expresses the fact that the error, the difference e^{x} − (1 + x + x^{2}/2), is smaller in absolute value than some constant times |x^{3}| when x is close enough to 0.
Properties
If a function f(n) can be written as a finite sum of other functions, then the fastest growing one determines the order of f(n). For example,

    f(n) = 9 log n + 5(log n)^{3} + 3n^{2} + 2n^{3} = O(n^{3}) as n → ∞.
In particular, if a function may be bounded by a polynomial in n, then as n tends to infinity, one may disregard lower-order terms of the polynomial.

O(n^{c}) and O(c^{n}) are very different. The latter grows much, much faster, no matter how big the constant c is (as long as it is greater than one). A function that grows faster than any power of n is called superpolynomial. One that grows more slowly than any exponential function of the form c^{n} is called subexponential. An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms for integer factorization.

O(log n) is exactly the same as O(log(n^{c})). The logarithms differ only by a constant factor (since log(n^{c}) = c log n) and thus the big O notation ignores that. Similarly, logs with different constant bases are equivalent. Exponentials with different bases, on the other hand, are not of the same order. For example, 2^{n} and 3^{n} are not of the same order.

Changing units may or may not affect the order of the resulting algorithm. Changing units is equivalent to multiplying the appropriate variable by a constant wherever it appears. For example, if an algorithm runs in the order of n^{2}, replacing n by cn means the algorithm runs in the order of c^{2}n^{2}, and the big O notation ignores the constant c^{2}. This can be written as c^{2}n^{2} = O(n^{2}). If, however, an algorithm runs in the order of 2^{n}, replacing n with cn gives 2^{cn} = (2^{c})^{n}. This is not equivalent to 2^{n} in general.

Changing of variable may also affect the order of the resulting algorithm. For example, if an algorithm's running time is O(n) when measured in terms of the number n of digits of an input number x, then its running time is O(log x) when measured as a function of the input number x itself, because n = Θ(log x).
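Two of these properties are easy to see numerically; a minimal sketch (the sample values are arbitrary):

    import math

    # log2(n) / log10(n) is the same constant (log2 of 10, about 3.3219)
    # for every n, so O(log2 n) and O(log10 n) coincide.
    for n in [10, 10**6, 10**12]:
        print(math.log2(n) / math.log10(n))

    # Replacing n by 2n inside 2^n multiplies the value by 2^n itself,
    # which is unbounded, so 2^(2n) is not O(2^n).
    for n in [10, 20, 30]:
        print(2**(2 * n) / 2**n)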
Product

    f_{1} = O(g_{1}) and f_{2} = O(g_{2}) ⇒ f_{1} f_{2} = O(g_{1} g_{2})
    f · O(g) = O(f g)

Sum

    f_{1} = O(g_{1}) and f_{2} = O(g_{2}) ⇒ f_{1} + f_{2} = O(max(|g_{1}|, |g_{2}|))

 This implies f_{1} = O(g) and f_{2} = O(g) ⇒ f_{1} + f_{2} ∈ O(g), which means that O(g) is a convex cone.
 If f and g are positive functions, f + O(g) = O(f + g).

Multiplication by a constant

 Let k be a constant. Then:

    k · O(g) = O(g) and O(k g) = O(g), if k is nonzero.
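The product and sum rules above are constructive: from witnesses (C_{1}, x_{1}) for f_{1} = O(g_{1}) and (C_{2}, x_{2}) for f_{2} = O(g_{2}), explicit witnesses for the conclusions follow. A minimal sketch; the concrete functions are hypothetical, chosen only for the spot-check:

    # Witnesses: |f1(x)| <= C1*|g1(x)| and |f2(x)| <= C2*|g2(x)| for x >= 1.
    f1, g1, C1, x1 = (lambda x: 3*x*x + x), (lambda x: x*x), 4, 1
    f2, g2, C2, x2 = (lambda x: 5*x + 7),   (lambda x: x),   12, 1

    prod_C, prod_x0 = C1 * C2, max(x1, x2)   # witness for the product rule
    sum_C,  sum_x0  = C1 + C2, max(x1, x2)   # witness for the sum rule

    for x in [2, 10, 1000]:
        assert abs(f1(x) * f2(x)) <= prod_C * abs(g1(x) * g2(x))
        assert abs(f1(x) + f2(x)) <= sum_C * max(abs(g1(x)), abs(g2(x)))
    print("product and sum witnesses held at all sampled points")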
Multiple variables
Big O (and little o, Ω, and so on) can also be used with multiple variables. To define big O formally for multiple variables, suppose f and g are two functions defined on some subset of ℝ^{n}. We say

    f(x) = O(g(x)) as x → ∞

if and only if

    there exist constants C and M such that |f(x)| ≤ C |g(x)| whenever x_{i} > M for every coordinate i.

For example, the statement

    f(n,m) = n^{2} + m^{3} + O(n + m) as n,m → ∞

asserts that there exist constants C and M such that

    |g(n,m)| ≤ C(n + m) whenever n > M and m > M,

where g(n,m) is defined by

    f(n,m) = n^{2} + m^{3} + g(n,m).

Note that this definition allows all of the coordinates of x to increase to infinity. In particular, the statement

    f(n,m) = O(n^{m}) as n,m → ∞

(i.e., there exist C and M such that the bound holds whenever both n > M and m > M) is quite different from

    for every fixed m: f(n,m) = O(n^{m}) as n → ∞

(i.e., for every fixed m there exist C and M, possibly depending on m, such that the bound holds whenever n > M).
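A small spot-check of the multivariable statement above, using a hypothetical f(n,m) = n^{2} + m^{3} + 4n + 7m chosen so that the residual g(n,m) is visibly bounded by a multiple of n + m:

    # The residual g(n, m) = f(n, m) - (n^2 + m^3) equals 4n + 7m here,
    # so |g(n, m)| <= C*(n + m) holds with C = 7 whenever n, m >= 1.
    def f(n, m):
        return n**2 + m**3 + 4 * n + 7 * m

    C, M = 7, 1
    for n in [1, 10, 1000]:
        for m in [1, 10, 1000]:
            g = f(n, m) - (n**2 + m**3)
            assert abs(g) <= C * (n + m)
    print("|g(n, m)| <= 7*(n + m) held at all sampled points with n, m >= 1")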
Matters of notation
Equals sign
The statement "f(x) is O(g(x))" as defined above is usually written as f(x) = O(g(x)). Some consider this to be an abuse of notation, since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have. As de Bruijn says, O(x) = O(x^{2}) is true but O(x^{2}) = O(x) is not.^{[2]} Knuth describes such statements as "oneway equalities", since if the sides could be reversed, "we could deduce ridiculous things like n = n^{2} from the identities n = O(n^{2}) and n^{2} = O(n^{2})."^{[3]} For these reasons, it would be more precise to use set notation and write f(x) ∈ O(g(x)), thinking of O(g(x)) as the class of all functions h(x) such that h(x) ≤ Cg(x) for some constant C.^{[3]} However, the use of the equals sign is customary. Knuth pointed out that "mathematicians customarily use the = sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle."^{[4]}
Other arithmetic operators
Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations. For example, h(x) + O(f(x)) denotes the collection of functions having the growth of h(x) plus a part whose growth is limited to that of f(x). Thus,

    g(x) = h(x) + O(f(x))

expresses the same as

    g(x) − h(x) = O(f(x)).
Example
Suppose an algorithm is being developed to operate on a set of n elements. Its developers are interested in finding a function T(n) that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set. The algorithm works by first calling a subroutine to sort the elements in the set and then performing its own operations. The sort has a known time complexity of O(n^{2}), and after the subroutine runs the algorithm must take an additional 55n^{3} + 2n + 10 time before it terminates. Thus the overall time complexity of the algorithm can be expressed as

    T(n) = O(n^{2}) + 55n^{3} + 2n + 10 = 55n^{3} + O(n^{2}).

This can perhaps be most easily read by replacing O(n^{2}) with "some function that grows no faster than n^{2}". Again, this usage disregards some of the formal meaning of the "=" and "+" symbols, but it does allow one to use the big O notation as a kind of convenient placeholder.
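A sketch of this bookkeeping in code. The constant 7 standing in for the O(n^{2}) sort is hypothetical (only its existence matters); the printed ratio shows the total is O(n^{3}):

    c = 7  # hypothetical constant hidden in the O(n^2) sorting subroutine

    def total_steps(n):
        return c * n**2 + 55 * n**3 + 2 * n + 10

    # total_steps(n) / n^3 stays bounded (tending to 55), so the whole
    # algorithm runs in O(n^3) time.
    for n in [10, 1000, 100_000]:
        print(n, total_steps(n) / n**3)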
Declaration of variables
Another feature of the notation, although less exceptional, is that function arguments may need to be inferred from the context when several variables are involved. The following two right-hand side big O notations have dramatically different meanings:

    f(m) = O(m^{n}),
    g(n) = O(m^{n}).

The first case states that f(m) exhibits polynomial growth, while the second, assuming m > 1, states that g(n) exhibits exponential growth. To avoid confusion, some authors use the notation

    O(g(x))

rather than the less explicit

    O(g).
Multiple usages
In more complicated usage, O(...) can appear in different places in an equation, even several times on each side. For example, the following are true for n → ∞:

    (n + 1)^{2} = n^{2} + O(n)
    (n + O(n^{1/2}))(n + O(log n))^{2} = n^{3} + O(n^{5/2})
    n^{O(1)} = O(e^{n})
The meaning of such statements is as follows: for any functions which satisfy each O(...) on the left side, there are some functions satisfying each O(...) on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function f(n) = O(1), there is some function g(n) = O(e^{n}) such that n^{f(n)} = g(n)." In terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of functions represented by the right side. In this use the "=" is a formal symbol that unlike the usual use of "=" is not a symmetric relation. Thus for example n^{O(1)} = O(e^{n}) does not imply the false statement O(e^{n}) = n^{O(1)}.
Orders of common functions
Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a constant and n increases without bound. The slower-growing functions are generally listed first. See table of common time complexities for a more comprehensive list.
Notation | Name | Example

 O(1) | constant | Determining if a number is even or odd; using a constant-size lookup table or hash table
 O(log log n) | double logarithmic | Finding an item using interpolation search in a sorted array of uniformly distributed values
 O(log n) | logarithmic | Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap
 O(n^{c}), 0 < c < 1 | fractional power | Searching in a k-d tree
 O(n) | linear | Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry
 O(n log n) | linearithmic, loglinear, or quasilinear | Performing a fast Fourier transform; heapsort, quicksort (best and average case), or merge sort
 O(n^{2}) | quadratic | Multiplying two n-digit numbers by a simple algorithm; bubble sort (worst case or naive implementation), Shell sort, quicksort (worst case), selection sort or insertion sort
 O(n^{c}), c > 1 | polynomial or algebraic | Tree-adjoining grammar parsing; maximum matching for bipartite graphs
 L_{n}[α, c], 0 < α < 1 | L-notation or sub-exponential | Factoring a number using the quadratic sieve or number field sieve
 O(c^{n}), c > 1 | exponential | Finding the (exact) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search
 O(n!) | factorial | Solving the travelling salesman problem via brute-force search; generating all unrestricted permutations of a poset; finding the determinant with expansion by minors

The statement f(n) = O(n!) is sometimes weakened to f(n) = O(n^{n}) to derive simpler formulas for asymptotic complexity. For any k > 0 and c > 0, O(n^{c}(log n)^{k}) is a subset of O(n^{c + ε}) for any ε > 0, so it may be considered as a polynomial with some bigger order.
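The ordering in the table can be made concrete by evaluating representative functions at a few input sizes; a simple sketch:

    import math

    # Representative functions from the table, slowest-growing first.
    classes = [
        ("1 (constant)", lambda n: 1),
        ("log log n",    lambda n: math.log(math.log(n))),
        ("log n",        lambda n: math.log(n)),
        ("n^(1/2)",      lambda n: n ** 0.5),
        ("n",            lambda n: n),
        ("n log n",      lambda n: n * math.log(n)),
        ("n^2",          lambda n: n ** 2),
        ("n^3",          lambda n: n ** 3),
        ("2^n",          lambda n: 2.0 ** n),
        ("n!",           lambda n: math.factorial(n)),
    ]

    for n in [10, 20, 50]:
        print(f"n = {n}")
        for name, fn in classes:
            print(f"  {name:14s} {fn(n):.3e}")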
Related asymptotic notations
Big O is the most commonly used asymptotic notation for comparing functions, although in many cases Big O may be replaced with Big Theta Θ for asymptotically tighter bounds. Here, we define some related notations in terms of Big O, progressing up to the family of Bachmann–Landau notations to which Big O notation belongs.
Little-o notation
The relation f(x) = o(g(x)) is read as "f(x) is little-o of g(x)". Intuitively, it means that g(x) grows much faster than f(x), or similarly, that the growth of f(x) is nothing compared to that of g(x). It assumes that f and g are both functions of one variable. Formally, f(n) = o(g(n)) as n → ∞ means that for every positive constant ε there exists a constant N such that

    |f(n)| ≤ ε |g(n)| for all n ≥ N.^{[3]}

Note the difference between the earlier formal definition for the big-O notation, and the present definition of little-o: while the former has to be true for at least one constant M, the latter must hold for every positive constant ε, however small.^{[1]} In this way, little-o notation makes a stronger statement than the corresponding big-O notation: every function that is little-o of g is also big-O of g, but not every function that is big-O of g is also little-o of g (for instance, g itself is not, unless it is identically zero near ∞).
If g(x) is nonzero, or at least becomes nonzero beyond a certain point, the relation f(x) = o(g(x)) is equivalent to

    lim_{x→∞} f(x)/g(x) = 0.

For example,

    2x = o(x^{2}),
    2x^{2} ≠ o(x^{2}), and
    1/x = o(1).
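These examples are easy to check against the limit characterization; a minimal numerical sketch:

    # 2x = o(x^2): the ratio 2x/x^2 tends to 0.
    # 2x^2 is O(x^2) but not o(x^2): the ratio is constantly 2, not 0.
    for x in [10, 1000, 10**6]:
        print(x, (2 * x) / x**2, (2 * x**2) / x**2)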
Little-o notation is common in mathematics but rarer in computer science. In computer science the variable (and function value) is most often a natural number. In mathematics, the variable and function values are often real numbers. The following properties can be useful:

    o(f) + o(f) ⊆ o(f)
    o(f) · o(g) ⊆ o(f g)
    o(o(f)) ⊆ o(f)
    o(f) ⊂ O(f) (and thus the above properties apply with most combinations of o and O).
As with big O notation, the statement "f(x) is o(g(x))" is usually written as f(x) = o(g(x)), which is a slight abuse of notation.
Family of Bachmann–Landau notations
Notation | Name | Intuition | As n → ∞, eventually...

 f(n) = O(g(n)) | Big Omicron; Big O; Big Oh | f is bounded above by g (up to constant factor) asymptotically | |f(n)| ≤ k·|g(n)| for some positive k
 f(n) = Ω(g(n)) | Big Omega | f is bounded below by g (up to constant factor) asymptotically | f(n) ≥ k·g(n) for some positive k (note that, since the beginning of the 20th century, papers in number theory have been increasingly and widely using this notation in the weaker sense that f = o(g) is false)
 f(n) = Θ(g(n)) | Big Theta | f is bounded both above and below by g asymptotically | k_{1}·g(n) ≤ f(n) ≤ k_{2}·g(n) for some positive k_{1}, k_{2}
 f(n) = o(g(n)) | Small Omicron; Small O; Small Oh | f is dominated by g asymptotically | |f(n)| ≤ ε·|g(n)| for every positive ε
 f(n) = ω(g(n)) | Small Omega | f dominates g asymptotically | |f(n)| ≥ k·|g(n)| for every positive k
 f(n) ~ g(n) | on the order of; "twiddles" | f is equal to g asymptotically | f(n)/g(n) → 1

Bachmann–Landau notation was designed around several mnemonics, as shown in the "As n → ∞, eventually..." column above and in the bullets below. To conceptually access these mnemonics, "omicron" can be read "o-micron" (i.e., "o-small") and "omega" can be read "o-mega" (i.e., "o-large"). Also, the lowercase versus capitalization of the Greek letters in Bachmann–Landau notation is mnemonic.
 The omicron mnemonic: The omicron reading of f(n) = O(g(n)) and of f(n) = o(g(n)) can be thought of as "O-smaller than" and "o-smaller than", respectively. This micro/smaller mnemonic refers to: for sufficiently large input parameter(s), f grows at a rate that may henceforth be less than cg, regarding O or o.
 The omega mnemonic: The omega reading of f(n) = Ω(g(n)) and of f(n) = ω(g(n)) can be thought of as "O-larger than". This mega/larger mnemonic refers to: for sufficiently large input parameter(s), f grows at a rate that may henceforth be greater than cg, regarding Ω or ω.
 The uppercase mnemonic: This mnemonic reminds us when to use the uppercase Greek letters in f(n) = O(g(n)) and f(n) = Ω(g(n)): for sufficiently large input parameter(s), f grows at a rate that may henceforth be equal to cg, regarding O or Ω.
 The lowercase mnemonic: This mnemonic reminds us when to use the lowercase Greek letters in f(n) = o(g(n)) and f(n) = ω(g(n)): for sufficiently large input parameter(s), f grows at a rate that is henceforth unequal to cg, regarding o or ω.
Aside from Big O notation, the Big Theta Θ and Big Omega Ω notations are the two most often used in computer science; the Small Omega ω notation is rarely used in computer science.
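These relations can be explored numerically. The following Python sketch samples the ratio f(n)/g(n) at one large n and suggests which relation from the table above might hold; the cutoff values are arbitrary assumptions, and a finite sample can of course never prove an asymptotic statement:

    def suggest_relation(f, g, n=10**6):
        # Heuristic only: samples the ratio f(n)/g(n) at a single large n.
        r = f(n) / g(n)
        if r < 1e-4:
            return "suggests f = o(g) (ratio tends to 0)"
        if r > 1e4:
            return "suggests f = omega(g) (ratio tends to infinity)"
        if abs(r - 1.0) < 1e-3:
            return "suggests f ~ g (ratio tends to 1)"
        return "suggests f = Theta(g) (ratio bounded away from 0 and infinity)"

    print(suggest_relation(lambda n: 3 * n**2 + n, lambda n: n**2))  # Theta
    print(suggest_relation(lambda n: n,            lambda n: n**2))  # o
    print(suggest_relation(lambda n: n**2,         lambda n: n))     # omega
    print(suggest_relation(lambda n: n + 1,        lambda n: n))     # ~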
Use in computer science
For more details on this topic, see Analysis of algorithms.

Informally, especially in computer science, the big O notation often is permitted to be somewhat abused to describe an asymptotic tight bound where using big Theta Θ notation might be more factually appropriate in a given context. For example, when considering a function T(n) = 73n^{3} + 22n^{2} + 58, all of the following are generally acceptable, but tighter bounds (i.e., numbers 2 and 3 below) are usually strongly preferred over looser bounds (i.e., number 1 below).
 T(n) = O(n^{100}), which is identical to T(n) ∈ O(n^{100})
 T(n) = O(n^{3}), which is identical to T(n) ∈ O(n^{3})
 T(n) = Θ(n^{3}), which is identical to T(n) ∈ Θ(n^{3}).
The equivalent English statements are respectively:
 T(n) grows asymptotically no faster than n^{100}
 T(n) grows asymptotically no faster than n^{3}
 T(n) grows asymptotically as fast as n^{3}.
So while all three statements are true, progressively more information is contained in each. In some fields, however, the big O notation (number 2 in the lists above) would be used more commonly than the big Theta notation (number 3 in the lists above) because functions that grow more slowly are more desirable. For example, if T(n) represents the running time of a newly developed algorithm for input size n, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound.
Extensions to the Bachmann–Landau notations
Another notation sometimes used in computer science is Õ (read soft-O): f(n) = Õ(g(n)) is shorthand for f(n) = O(g(n) log^{k} g(n)) for some k. Essentially, it is big O notation, ignoring logarithmic factors, because the growth-rate effects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s). This notation is often used to obviate the "nitpicking" within growth rates that are stated as too tightly bounded for the matters at hand (since log^{k} n is always o(n^{ε}) for any constant k and any ε > 0). The L notation, defined as

    L_{n}[α, c] = e^{(c + o(1))(ln n)^{α}(ln ln n)^{1−α}},

is convenient for functions that are between polynomial and exponential.
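A minimal sketch of how the L notation interpolates, evaluating L_{n}[α, c] with the o(1) term dropped (the sample values n = 10^{100} and c = 1.9 are arbitrary assumptions; α = 1/3 is the exponent associated with the number field sieve):

    import math

    # L_n[alpha, c] = exp(c * (ln n)^alpha * (ln ln n)^(1 - alpha)),
    # dropping the o(1) correction.  alpha = 0 gives (ln n)^c, polynomial
    # in the bit length of n; alpha = 1 gives n^c, exponential in the bit
    # length; intermediate alpha lies strictly between the two.
    def L(n, alpha, c):
        ln_n = math.log(n)
        return math.exp(c * ln_n**alpha * math.log(ln_n)**(1 - alpha))

    n = 10**100
    for alpha in (0.0, 1/3, 0.5, 1.0):
        print(alpha, L(n, alpha, 1.9))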
The generalization to functions taking values in any normed vector space is straightforward (replacing absolute values by norms), where f and g need not take their values in the same space. A generalization to functions g taking values in any topological group is also possible. The "limiting process" x → x_{0} can also be generalized by introducing an arbitrary filter base, i.e. to directed nets f and g. The o notation can be used to define derivatives and differentiability in quite general spaces, and also (asymptotical) equivalence of functions,

    f ~ g if and only if f = g + o(g),

which is an equivalence relation and a more restrictive notion than the relationship "f is Θ(g)" from above. (It reduces to lim f/g = 1 if f and g are positive real-valued functions.) For example, 2x is Θ(x), but 2x − x = x is not o(x), so 2x is not asymptotically equivalent to x.
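Numerically, the definition f ~ g, i.e. (f − g) = o(g), can be spot-checked as follows (a sketch only):

    # x^2 + x ~ x^2: the relative error (f - g)/g = 1/x tends to 0.
    # 2x is Theta(x) but not ~ x: (2x - x)/x is constantly 1.
    for x in [10, 1000, 10**6]:
        f, g = x**2 + x, x**2
        print(x, (f - g) / g, (2 * x - x) / x)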
Graph theory
It is often useful to bound the running time of graph algorithms. Unlike most other computational problems, for a graph G = (V, E) there are two relevant parameters describing the size of the input: the number |V| of vertices in the graph and the number |E| of edges in the graph. Inside asymptotic notation (and only there), it is common to use the symbols V and E, when one really means |V| and |E|. This convention simplifies asymptotic functions and makes them easily readable. The symbols V and E are never used inside asymptotic notation with their literal meaning (the number of vertices and edges is in any case nonnegative), so this abuse of notation does not risk ambiguity. For example, O(E + V log V) means O(|E| + |V| log|V|). Another common convention—referring to the values |V| and |E| by the names n and m, respectively—sidesteps this ambiguity.
History
The notation was first introduced by number theorist Paul Bachmann in 1894, in the second volume of his book Analytische Zahlentheorie ("analytic number theory"), the first volume of which (not yet containing big O notation) was published in 1892.^{[5]} The notation was popularized in the work of number theorist Edmund Landau; hence it is sometimes called a Landau symbol. It was popularized in computer science by Donald Knuth, who reintroduced the related Omega and Theta notations.^{[6]} He also noted that the (then obscure) Omega notation had been introduced by Hardy and Littlewood^{[7]} under a slightly different meaning, and proposed the current definition. Hardy's symbols were (in terms of the modern O notation)
    f ≼ g ⟺ f = O(g) and f ≺ g ⟺ f = o(g);
other similar symbols were sometimes used, such as f ≪ g and f ≫ g. The big-O, standing for "order of", was originally a capital omicron; today the identical-looking Latin capital letter O is used, but never the digit zero.
See also
 Asymptotic expansion: Approximation of functions generalizing Taylor's formula
 Asymptotically optimal: A phrase frequently used to describe an algorithm that has an upper bound asymptotically within a constant of a lower bound for the problem
 Limit superior and limit inferior: An explanation of some of the limit notation used in this article
 Nachbin's theorem: A precise method of bounding complex analytic functions so that the domain of convergence of integral transforms can be stated
 Big O in probability notation: O_{p},o_{p}
 Computational complexity theory: A subfield strongly related to this article
References
 ^ ^{a} ^{b} Thomas H. Cormen et al., 2001, Introduction to Algorithms, Second Edition
 ^ N. G. de Bruijn (1958). Asymptotic Methods in Analysis. Amsterdam: North-Holland. pp. 5–7. ISBN 9780486642215. http://books.google.com/?id=_tnwmvHmVwMC&pg=PA5&vq=%22The+trouble+is%22.
 ^ ^{a} ^{b} ^{c} Ronald Graham, Donald Knuth, and Oren Patashnik (1994). Concrete Mathematics (2 ed.). Reading, Massachusetts: Addison-Wesley. p. 446. ISBN 9780201558029. http://books.google.com/?id=pntQAAAAMAAJ&dq=editions:ISBN0201558025.
 ^ Donald Knuth (June/July 1998). "Teach Calculus with Big O". Notices of the American Mathematical Society 45 (6): 687. http://www.ams.org/notices/199806/commentary.pdf. (Unabridged version)
 ^ Nicholas J. Higham, Handbook of writing for the mathematical sciences, SIAM. ISBN 0898714206, p. 25
 ^ Donald Knuth. Big Omicron and big Omega and big Theta, ACM SIGACT News, Volume 8, Issue 2, 1976.
 ^ G. H. Hardy and J. E. Littlewood, Some problems of Diophantine approximation, Acta Mathematica 37 (1914), p. 225
Further reading
 Paul Bachmann. Die Analytische Zahlentheorie. Zahlentheorie, pt. 2. Leipzig: B. G. Teubner, 1894.
 Edmund Landau. Handbuch der Lehre von der Verteilung der Primzahlen. 2 vols. Leipzig: B. G. Teubner, 1909.
 G. H. Hardy. Orders of Infinity: The 'Infinitärcalcül' of Paul du Bois-Reymond, 1910.
 Donald Knuth. The Art of Computer Programming, Volume 1: Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0201896834. Section 1.2.11: Asymptotic Representations, pp. 107–123.
 Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0262032937. Section 3.1: Asymptotic notation, pp. 41–50.
 Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 053494728X. Pages 226–228 of section 7.1: Measuring complexity.
 Jeremy Avigad, Kevin Donnelly. Formalizing O notation in Isabelle/HOL
 Paul E. Black, "bigO notation", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 11 March 2005. Retrieved December 16, 2006.
 Paul E. Black, "littleo notation", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. Retrieved December 16, 2006.
 Paul E. Black, "Ω", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. Retrieved December 16, 2006.
 Paul E. Black, "ω", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 29 November 2004. Retrieved December 16, 2006.
 Paul E. Black, "Θ", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. Retrieved December 16, 2006.