Integral test for convergence
In mathematics, the integral test for convergence is a method used to test infinite series of non-negative terms for convergence. An early form of the test was developed in India by Madhava in the 14th century, and by his followers at the Kerala School. In Europe, it was later developed by Maclaurin and Cauchy and is sometimes known as the Maclaurin–Cauchy test.
Statement of the test
Consider an integer N and a non-negative, monotone decreasing function f defined on the unbounded interval [N, ∞). Then the series

\sum_{n=N}^\infty f(n)

converges if and only if the integral

\int_N^\infty f(x)\,dx

is finite. In particular, if the integral diverges, then the series diverges as well.
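As a minimal worked example (the choice f(x) = 1/x² with N = 1 is made here purely for illustration),

\int_1^\infty \frac{dx}{x^2} = -\frac{1}{x} \Bigr|_1^\infty = 1 < \infty,

so the test shows that the series \sum_{n=1}^\infty 1/n^2 converges.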
Proof
The proof basically uses the comparison test, comparing the term f(n) with the integral of f over the intervals [n − 1, n] and [n, n + 1], respectively.

Since f is a monotone decreasing function, we know that

f(x) \le f(n) \quad \text{for } x \in [n, \infty)

and

f(n) \le f(x) \quad \text{for } x \in [N, n].

Hence, for every n larger than N,

\int_n^{n+1} f(x)\,dx \le \int_n^{n+1} f(n)\,dx = f(n) = \int_{n-1}^{n} f(n)\,dx \le \int_{n-1}^n f(x)\,dx.

Since the lower estimate is also valid for f(N), we get by summation over all n from N to some larger integer M

\int_N^{M+1} f(x)\,dx \le \sum_{n=N}^M f(n) \le f(N) + \int_N^M f(x)\,dx.

Letting M tend to infinity, the result follows.
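As a quick illustration of this estimate (taking f(x) = 1/x² and N = 1 purely as an example), letting M tend to infinity in the last display gives

\int_1^\infty \frac{dx}{x^2} \le \sum_{n=1}^\infty \frac{1}{n^2} \le f(1) + \int_1^\infty \frac{dx}{x^2}, \qquad\text{i.e.}\qquad 1 \le \sum_{n=1}^\infty \frac{1}{n^2} \le 2,

so the argument yields not only convergence but also explicit bounds on the value of the series.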
Applications
The harmonic series

\sum_{n=1}^\infty \frac{1}{n}

diverges because, using the natural logarithm, its derivative, and the fundamental theorem of calculus, we get

\int_1^M \frac{1}{x}\,dx = \ln x \Bigr|_1^M = \ln M \to \infty \quad \text{for } M \to \infty.

In contrast, the series

\sum_{n=1}^\infty \frac{1}{n^{1+\varepsilon}}

(cf. Riemann zeta function) converges for every ε > 0, because

\int_1^M \frac{1}{x^{1+\varepsilon}}\,dx = -\frac{1}{\varepsilon x^\varepsilon} \Bigr|_1^M = \frac{1}{\varepsilon} \Bigl(1 - \frac{1}{M^\varepsilon}\Bigr) \le \frac{1}{\varepsilon} \quad \text{for all } M \ge 1.
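These two examples can also be checked numerically. The short sketch below is only an informal illustration (the cutoff M and the value ε = 0.5 are arbitrary choices); it compares the harmonic partial sums with ln M, and the partial sums of the second series with the bound 1 + 1/ε that follows from the estimate in the proof above.

import math

# Informal numerical check of the two examples above (illustration only).
M = 10**6
eps = 0.5  # arbitrary epsilon > 0 for the second series

harmonic = sum(1.0 / n for n in range(1, M + 1))
p_series = sum(1.0 / n ** (1 + eps) for n in range(1, M + 1))

# The harmonic partial sums track ln M and grow without bound,
# while the second sum stays below f(1) + 1/eps = 1 + 1/eps.
print(harmonic, math.log(M))   # both keep growing as M increases
print(p_series, 1 + 1 / eps)   # the partial sums stay below the bound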
Borderline between divergence and convergence
The above examples involving the harmonic series raise the question of whether there are monotone sequences such that f(n) decreases to 0 faster than 1/n but slower than 1/n^{1+ε}, in the sense that

\lim_{n\to\infty} \frac{f(n)}{1/n} = 0 \quad\text{and}\quad \lim_{n\to\infty} \frac{f(n)}{1/n^{1+\varepsilon}} = \infty

for every ε > 0, and whether the corresponding series of the f(n) still diverges. Once such a sequence is found, a similar question can be asked with f(n) taking the role of 1/n, and so on. In this way it is possible to investigate the borderline between divergence and convergence.
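For concreteness (anticipating the k = 1 member of the family constructed below), one such sequence is f(n) = 1/(n ln n) for n ≥ 2, since

\lim_{n\to\infty} \frac{1/(n \ln n)}{1/n} = \lim_{n\to\infty} \frac{1}{\ln n} = 0
\quad\text{and}\quad
\lim_{n\to\infty} \frac{1/(n \ln n)}{1/n^{1+\varepsilon}} = \lim_{n\to\infty} \frac{n^\varepsilon}{\ln n} = \infty

for every ε > 0.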
Using the integral test for convergence, one can show (see below) that, for every natural number k, the series

\sum_{n=N_k}^\infty \frac{1}{n \ln(n) \ln_2(n) \cdots \ln_{k-1}(n) \ln_k(n)}

still diverges (cf. proof that the sum of the reciprocals of the primes diverges for k = 1) but

\sum_{n=N_k}^\infty \frac{1}{n \ln(n) \ln_2(n) \cdots \ln_{k-1}(n) (\ln_k(n))^{1+\varepsilon}}

converges for every ε > 0. Here ln_k denotes the k-fold composition of the natural logarithm, defined recursively by

\ln_k(x) = \begin{cases} \ln(x) & \text{for } k = 1, \\ \ln(\ln_{k-1}(x)) & \text{for } k \ge 2. \end{cases}

Furthermore, N_k denotes the smallest natural number such that the k-fold composition is well-defined and ln_k(N_k) ≥ 1, i.e.

N_k \ge \underbrace{e^{e^{\cdot^{\cdot^{e}}}}}_{k\ e\text{'s}} = e \uparrow\uparrow k

using tetration or Knuth's up-arrow notation.

To see the divergence of the first series using the integral test, note that by repeated application of the chain rule

\frac{d}{dx} \ln_{k+1}(x) = \frac{d}{dx} \ln(\ln_k(x)) = \frac{1}{\ln_k(x)} \frac{d}{dx} \ln_k(x) = \cdots = \frac{1}{x \ln(x) \cdots \ln_k(x)},

hence

\int_{N_k}^\infty \frac{dx}{x \ln(x) \cdots \ln_k(x)} = \ln_{k+1}(x) \Bigr|_{N_k}^\infty = \infty.

To see the convergence of the second series, note that by the power rule, the chain rule and the above result,

\frac{d}{dx} \Bigl( -\frac{1}{\varepsilon (\ln_k(x))^\varepsilon} \Bigr) = \frac{1}{(\ln_k(x))^{1+\varepsilon}} \frac{d}{dx} \ln_k(x) = \cdots = \frac{1}{x \ln(x) \cdots \ln_{k-1}(x) (\ln_k(x))^{1+\varepsilon}},

hence

\int_{N_k}^\infty \frac{dx}{x \ln(x) \cdots \ln_{k-1}(x) (\ln_k(x))^{1+\varepsilon}} = -\frac{1}{\varepsilon (\ln_k(x))^\varepsilon} \Bigr|_{N_k}^\infty = \frac{1}{\varepsilon (\ln_k(N_k))^\varepsilon} < \infty,

since ln_k(N_k) ≥ 1 by the choice of N_k.
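A rough numerical illustration of the k = 1 case (an informal sketch only): with ε = 1, the divergent series is the sum of 1/(n ln n) and the convergent one the sum of 1/(n (ln n)²), starting at N₁ = 3, the smallest integer with ln n ≥ 1. The cutoff values of M below are arbitrary choices for this sketch.

import math

# Informal numerical look at the k = 1 borderline case (illustration only):
# the first sum grows roughly like ln(ln M), i.e. it diverges extremely slowly,
# while the second sum (eps = 1) stays bounded as M increases.
N1 = 3  # smallest integer n with ln(n) >= 1

for M in (10**3, 10**5, 10**6):
    s_div = sum(1.0 / (n * math.log(n)) for n in range(N1, M + 1))
    s_conv = sum(1.0 / (n * math.log(n) ** 2) for n in range(N1, M + 1))
    # ln(ln M) is the antiderivative obtained from the chain-rule computation above.
    print(M, round(s_div, 3), round(math.log(math.log(M)), 3), round(s_conv, 3))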