- Power series
In mathematics, a power series (in one variable) is an infinite series of the form
:f(x) = \sum_{n=0}^\infty a_n \left( x-c \right)^n = a_0 + a_1 (x-c)^1 + a_2 (x-c)^2 + a_3 (x-c)^3 + \cdots
where a_n represents the coefficient of the "n"th term, "c" is a constant, and "x" varies around "c" (for this reason one sometimes speaks of the series as being "centered" at "c"). This series usually arises as the Taylor series of some known function; the Taylor series article contains many examples.
In many situations "c" is equal to zero, for instance when considering a Maclaurin series. In such cases, the power series takes the simpler form
:f(x) = \sum_{n=0}^\infty a_n x^n = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots.
These power series arise primarily in analysis, but also occur in combinatorics (under the name of generating functions) and in electrical engineering (under the name of the Z-transform). The familiar decimal notation for integers can also be viewed as an example of a power series, but with the argument "x" fixed at 10, and with the summation ranging over the integers instead of the naturals. In number theory, the concept of p-adic numbers is also closely related to that of a power series.
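As a rough numerical illustration (the helper name partial_sum and its arguments are ours, not standard), the following Python sketch sums the first few terms of a power series with a given coefficient rule, here the Maclaurin series of e^x with a_n = 1/n!.

    # A minimal sketch (illustrative only) of evaluating a truncated power series
    # sum_{n=0}^{terms-1} a_n * (x - c)^n; the helper name is hypothetical.

    def partial_sum(coeff, x, c=0.0, terms=20):
        """Sum the first `terms` terms of sum_n coeff(n) * (x - c)**n."""
        return sum(coeff(n) * (x - c) ** n for n in range(terms))

    if __name__ == "__main__":
        from math import factorial, exp
        # Maclaurin series of e^x: a_n = 1/n!, centered at c = 0.
        approx = partial_sum(lambda n: 1.0 / factorial(n), x=1.0, terms=20)
        print(approx, exp(1.0))   # the two values agree to many decimal places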
[Figure: the exponential function (in blue) and the sum of the first "n"+1 terms of its Maclaurin power series (in red).]
Examples
Any polynomial can be easily expressed as a power series around any center "c", albeit one with most coefficients equal to zero. For instance, the polynomial f(x) = x^2 + 2x + 3 can be written as a power series around the center c = 0 as
:f(x) = 3 + 2x + 1x^2 + 0x^3 + 0x^4 + \cdots
or around the center c = 1 as
:f(x) = 6 + 4(x-1) + 1(x-1)^2 + 0(x-1)^3 + 0(x-1)^4 + \cdots
or indeed around any other center "c". One can view power series as being like "polynomials of infinite degree," although power series are not polynomials.
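As a small check of the re-centering above (the function name below is ours and handles only quadratics), the new coefficients around c are the Taylor coefficients f(c), f'(c), f''(c)/2!.

    # Illustrative sketch: coefficients of a0 + a1*x + a2*x^2 in powers of (x - c).

    def recenter_quadratic(a0, a1, a2, c):
        b0 = a0 + a1 * c + a2 * c ** 2   # f(c)
        b1 = a1 + 2 * a2 * c             # f'(c)
        b2 = a2                          # f''(c) / 2!
        return b0, b1, b2

    print(recenter_quadratic(3, 2, 1, c=1))   # -> (6, 4, 1), matching the expansion above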
The geometric series formula
:\frac{1}{1-x} = \sum_{n=0}^\infty x^n = 1 + x + x^2 + x^3 + \cdots,
which is valid for |x| < 1, is one of the most important examples of a power series, as are the exponential function formula
:e^x = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots,
and the sine formula
:\sin(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots,
valid for all real x. These power series are also examples of Taylor series.
Negative powers are not permitted in a power series; for instance, 1 + x^{-1} + x^{-2} + \cdots is not considered a power series (although it is a Laurent series). Similarly, fractional powers such as x^{1/2} are not permitted (but see Puiseux series). The coefficients a_n are not allowed to depend on x; thus, for instance,
:\sin(x)\, x + \sin(2x)\, x^2 + \sin(3x)\, x^3 + \cdots
is not a power series.
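A quick numerical illustration of the geometric series (the helper below is hypothetical): for |x| < 1 its partial sums approach 1/(1 − x).

    # Illustrative check: partial sums of the geometric series for |x| < 1.

    def geometric_partial_sum(x, terms):
        return sum(x ** n for n in range(terms))

    x = 0.5
    print(geometric_partial_sum(x, 50), 1.0 / (1.0 - x))   # both are (close to) 2.0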
Radius of convergence
A power series will converge for some values of the variable "x" and may diverge for others. All power series converge at "x" = "c". There is always a number "r" with 0 ≤ "r" ≤ ∞ such that the series converges whenever |"x" − "c"| < "r" and diverges whenever |"x" − "c"| > "r". The number "r" is called the radius of convergence of the power series; in general it is given as
:r = \liminf_{n\to\infty} \left|a_n\right|^{-\frac{1}{n}}
or, equivalently,
:r^{-1} = \limsup_{n\to\infty} \left|a_n\right|^{\frac{1}{n}}
(see limit superior and limit inferior). A fast way to compute it is
:r^{-1} = \lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right|
if this limit exists.
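A rough numerical sketch of the ratio formula (the helper name is ours; it simply evaluates the ratio at one large index rather than taking a true limit), applied to a_n = 2^n, for which r = 1/2:

    def radius_via_ratio(coeff, n=1000):
        """Approximate the radius of convergence by |a_n / a_{n+1}| at a large n."""
        return abs(coeff(n) / coeff(n + 1))

    print(radius_via_ratio(lambda n: 2.0 ** n))   # prints 0.5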
The series converges absolutely for |"x" - "c"| < "r" and converges uniformly on every
compact subset of {"x" : |"x" − "c"| < "r"}. For |"x" − "c"| = "r", we cannot make any general statement on whether the series converges or diverges. However, Abel's theorem states that the sum of the series is continuous at "x" if the series converges at "x".
Operations on power series
Addition and subtraction
When two functions "f" and "g" are decomposed into power series around the same center "c", the power series of the sum or difference of the functions can be obtained by termwise addition and subtraction. That is, if
:f(x) = \sum_{n=0}^\infty a_n (x-c)^n
:g(x) = \sum_{n=0}^\infty b_n (x-c)^n
then
:f(x) \pm g(x) = \sum_{n=0}^\infty (a_n \pm b_n) (x-c)^n.
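A minimal sketch of termwise addition (representation as finite coefficient lists and the helper name are our assumptions):

    from itertools import zip_longest

    def add_series(a, b):
        """Coefficients of f + g, where a[n] and b[n] are the n-th coefficients."""
        return [an + bn for an, bn in zip_longest(a, b, fillvalue=0)]

    # (1 + x + x^2) + (2 - x) = 3 + 0*x + x^2
    print(add_series([1, 1, 1], [2, -1]))   # -> [3, 0, 1]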
Multiplication and division
With the same definitions as above, the power series of the product and quotient of the functions can be obtained as follows:
:f(x) g(x) = \left(\sum_{n=0}^\infty a_n (x-c)^n\right)\left(\sum_{n=0}^\infty b_n (x-c)^n\right)
:= \sum_{i=0}^\infty \sum_{j=0}^\infty a_i b_j (x-c)^{i+j}
:= \sum_{n=0}^\infty \left(\sum_{i=0}^n a_i b_{n-i}\right) (x-c)^n.
The sequence m_n = \sum_{i=0}^n a_i b_{n-i} is known as the convolution of the sequences a_n and b_n.
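A short sketch of this Cauchy product for finite coefficient lists (the helper name is illustrative):

    def multiply_series(a, b):
        """Coefficients of f*g: the n-th entry is sum_{i+j=n} a_i * b_j."""
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] += ai * bj
        return out

    # (1 + x)^2 = 1 + 2x + x^2
    print(multiply_series([1, 1], [1, 1]))   # -> [1, 2, 1]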
For division, observe
:\frac{f(x)}{g(x)} = \frac{\sum_{n=0}^\infty a_n (x-c)^n}{\sum_{n=0}^\infty b_n (x-c)^n} = \sum_{n=0}^\infty d_n (x-c)^n
:f(x) = \left(\sum_{n=0}^\infty b_n (x-c)^n\right)\left(\sum_{n=0}^\infty d_n (x-c)^n\right)
and then use the above, comparing coefficients.
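A sketch of that coefficient comparison (illustrative, and it assumes b_0 ≠ 0): from a_n = \sum_{i=0}^n b_i d_{n-i} one can solve for d_n recursively.

    def divide_series(a, b, terms):
        """First `terms` coefficients d_n of f/g, given coefficients a of f and b of g (b[0] != 0)."""
        d = []
        for n in range(terms):
            an = a[n] if n < len(a) else 0
            s = sum(b[i] * d[n - i] for i in range(1, n + 1) if i < len(b))
            d.append((an - s) / b[0])
        return d

    # 1 / (1 - x) = 1 + x + x^2 + ...
    print(divide_series([1], [1, -1], terms=5))   # -> [1.0, 1.0, 1.0, 1.0, 1.0]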
Differentiation and integration
Once a function is given as a power series, it is continuous wherever it converges and is differentiable on the interior of this set. It can be differentiated and integrated quite easily, by treating every term separately:
:f^\prime(x) = \sum_{n=1}^\infty a_n n \left( x-c \right)^{n-1} = \sum_{n=0}^\infty a_{n+1} \left( n+1 \right) \left( x-c \right)^n
:\int f(x)\,dx = \sum_{n=0}^\infty \frac{a_n \left( x-c \right)^{n+1}}{n+1} + k = \sum_{n=1}^\infty \frac{a_{n-1} \left( x-c \right)^n}{n} + k.
Both of these series have the same radius of convergence as the original one.
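A brief sketch of the termwise rules on a coefficient list [a_0, a_1, a_2, ...] (both function names are ours):

    def differentiate(a):
        """Coefficients of f': the n-th coefficient is (n+1)*a_{n+1}."""
        return [(n + 1) * a[n + 1] for n in range(len(a) - 1)]

    def integrate(a, k=0):
        """Coefficients of an antiderivative of f, with constant term k."""
        return [k] + [a[n] / (n + 1) for n in range(len(a))]

    # f(x) = 1 + x + x^2:  f'(x) = 1 + 2x,  the antiderivative is k + x + x^2/2 + x^3/3
    print(differentiate([1, 1, 1]))   # -> [1, 2]
    print(integrate([1, 1, 1]))       # -> [0, 1.0, 0.5, 0.333...]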
Analytic functions
A function "f" defined on some open subset "U" of R or C is called analytic if it is locally given by power series. This means that every "a" ∈ "U" has an open neighborhood "V" ⊆ "U", such that there exists a power series with center "a" which converges to "f"("x") for every "x" ∈ "V".
Every power series with a positive radius of convergence is analytic on the interior of its region of convergence. All
holomorphic functions are complex-analytic. Sums and products of analytic functions are analytic, as are quotients as long as the denominator is non-zero. If a function is analytic, then it is infinitely often differentiable, but in the real case the converse is not generally true. For an analytic function, the coefficients a_n can be computed as
:a_n = \frac{f^{\left( n \right)}\left( c \right)}{n!}
where f^{(n)}(c) denotes the "n"th derivative of "f" at "c", and f^{(0)}(c) = f(c). This means that every analytic function is locally represented by its
Taylor series. The global form of an analytic function is completely determined by its local behavior in the following sense: if "f" and "g" are two analytic functions defined on the same connected open set "U", and if there exists an element "c" ∈ "U" such that f^{(n)}(c) = g^{(n)}(c) for all "n" ≥ 0, then "f"("x") = "g"("x") for all "x" ∈ "U".
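An illustrative check of a_n = f^{(n)}(c)/n! using SymPy (an assumed dependency, not part of the article), here for f(x) = 1/(1 − x) at c = 0, whose Taylor coefficients are all 1:

    import sympy as sp

    x = sp.symbols('x')
    f = 1 / (1 - x)
    c = 0
    # n-th derivative at c, divided by n!, for the first five coefficients
    coeffs = [sp.diff(f, x, n).subs(x, c) / sp.factorial(n) for n in range(5)]
    print(coeffs)   # -> [1, 1, 1, 1, 1]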
If a power series with radius of convergence "r" is given, one can consider
analytic continuations of the series, i.e. analytic functions "f" which are defined on larger sets than {"x" : |"x" − "c"| < "r"} and agree with the given power series on this set. The number "r" is maximal in the following sense: there always exists a complex number "x" with |"x" − "c"| = "r" such that no analytic continuation of the series can be defined at "x".
The power series expansion of the inverse function of an analytic function can be determined using the Lagrange inversion theorem.
Formal power series
In abstract algebra, one attempts to capture the essence of power series without being restricted to the fields of real and complex numbers, and without the need to talk about convergence. This leads to the concept of formal power series, a concept of great utility in algebraic combinatorics.
Power series in several variables
An extension of the theory is necessary for the purposes of
multivariable calculus. A power series is here defined to be an infinite series of the form
:f(x_1,\dots,x_n) = \sum_{j_1,\dots,j_n = 0}^{\infty} a_{j_1,\dots,j_n} \prod_{k=1}^n \left( x_k - c_k \right)^{j_k},
where "j" = ("j"1, ..., "j""n") is a vector of natural numbers, the coefficients"a"("j1,...,jn") are usually real or complex numbers, and the center "c" = ("c"1, ..., "c""n") and argument "x" = ("x"1, ..., "x""n") are usually real or complex vectors. In the more convenient
multi-index notation this can be written::f(x) = sum_{alpha in mathbb{N}^n} a_{alpha} left(x - c ight)^{alpha}.
The theory of such series is trickier than for single-variable series, with more complicated regions of convergence. For instance, the power series \sum_{n=0}^\infty x_1^n x_2^n is absolutely convergent in the set \{ (x_1, x_2) : |x_1 x_2| < 1 \} between two hyperbolae. (This is an example of a "log-convex set", in the sense that the set of points (\log |x_1|, \log |x_2|), where (x_1, x_2) lies in the above region, is a convex set. More generally, one can show that when c = 0, the interior of the region of absolute convergence is always a log-convex set in this sense.) On the other hand, in the interior of this region of convergence one may differentiate and integrate under the series sign, just as one may with ordinary power series.
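A numerical illustration of that example (the helper name is ours): the two-variable series \sum_n x_1^n x_2^n converges to 1/(1 − x_1 x_2) whenever |x_1 x_2| < 1, even if one of the variables is large.

    def two_var_partial_sum(x1, x2, terms=60):
        return sum((x1 * x2) ** n for n in range(terms))

    x1, x2 = 3.0, 0.25          # |x1*x2| = 0.75 < 1, even though |x1| > 1
    print(two_var_partial_sum(x1, x2), 1.0 / (1.0 - x1 * x2))   # both are approximately 4.0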
Order of a power series
Let α be a multi-index for a power series "f"("x"1, "x"2, …, "x""n"). The order of the power series "f" is defined to be the least value |α| such that "a"α ≠ 0, or ∞ if "f" ≡ 0. In particular, for a power series "f"("x") in a single variable "x", the order of "f" is the smallest power of "x" with a nonzero coefficient. This definition readily extends to Laurent series.
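A tiny sketch of this definition (the dictionary representation of coefficients and the function name are our assumptions): the order is the least |α| = α_1 + ... + α_n over multi-indices with a nonzero coefficient.

    def order(coeffs):
        """coeffs maps multi-index tuples to coefficients; returns the order of the series."""
        degrees = [sum(alpha) for alpha, a in coeffs.items() if a != 0]
        return min(degrees) if degrees else float('inf')   # order of the zero series taken as infinity

    # f(x1, x2) = 2*x1*x2 + x1^3 has order 2
    print(order({(1, 1): 2, (3, 0): 1}))   # -> 2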
External links
* [http://math.fullerton.edu/mathews/c2003/ComplexPowerSeriesMod.html Complex Power Series Module by John H. Mathews]
* [http://demonstrations.wolfram.com/PowersOfComplexNumbers/ Powers of Complex Numbers] by Michael Schreiber, The Wolfram Demonstrations Project.