Power iteration
In mathematics, the power iteration is an eigenvalue algorithm: given a matrix A, the algorithm produces a number λ (the eigenvalue) and a nonzero vector v (the eigenvector) such that Av = λv.

The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when A is a very large sparse matrix. However, it finds only one eigenvalue (the one with the greatest absolute value), and it may converge only slowly.
The method

The power iteration algorithm starts with a vector b_0, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence

    b_{k+1} = \frac{A b_k}{\| A b_k \|}.

So, at every iteration, the vector b_k is multiplied by the matrix A and normalized.
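As a concrete illustration, here is a minimal NumPy sketch of the iteration (the function name, the fixed iteration count, and the example matrix are choices made for this example, not part of the algorithm's definition):

    import numpy as np

    def power_iteration(A, num_iterations=1000):
        # Start from a random vector; with probability 1 it has a nonzero
        # component in the direction of the dominant eigenvector.
        b_k = np.random.rand(A.shape[1])
        for _ in range(num_iterations):
            # Multiply by A, then normalize so the iterates neither
            # overflow nor underflow.
            b_k1 = A @ b_k
            b_k = b_k1 / np.linalg.norm(b_k1)
        return b_k

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    print(power_iteration(A))  # approximates the dominant eigenvector of A

In practice one would stop when successive iterates agree to a tolerance rather than after a fixed number of steps; the fixed count above keeps the sketch short.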
Under the assumptions:

* A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues;
* the starting vector b_0 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue;

then:

* a subsequence of (b_k) converges to an eigenvector associated with the dominant eigenvalue.

Note that the sequence (b_k) does not necessarily converge. It can be shown that

    b_k = e^{i \phi_k} v_1 + r_k,

where v_1 is an eigenvector associated with the dominant eigenvalue and \| r_k \| \to 0. The presence of the factor e^{i \phi_k} implies that (b_k) does not converge unless e^{i \phi_k} = 1. Under the two assumptions listed above, however, the sequence (\mu_k) defined by the Rayleigh quotient

    \mu_k = \frac{b_k^* A b_k}{b_k^* b_k}

converges to the dominant eigenvalue. The method can therefore also be used to calculate the spectral radius of a matrix, the magnitude of its largest eigenvalue.
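Continuing the sketch above, the eigenvalue estimate can be computed from the final iterate; since b_k has unit norm, the Rayleigh quotient reduces to b_k^* A b_k (the helper name below is an illustrative choice):

    import numpy as np

    def dominant_eigenvalue(A, num_iterations=1000):
        # Run the power iteration as before.
        b_k = np.random.rand(A.shape[1])
        for _ in range(num_iterations):
            b_k1 = A @ b_k
            b_k = b_k1 / np.linalg.norm(b_k1)
        # Rayleigh quotient mu_k = (b_k^* A b_k) / (b_k^* b_k);
        # the denominator is 1 because b_k is normalized.
        return b_k.conj() @ A @ b_k

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    print(dominant_eigenvalue(A))  # ~3.618, the dominant eigenvalue of A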
Analysis

Let A be decomposed into its Jordan canonical form: A = V J V^{-1}, where the first column of V is an eigenvector of A corresponding to the dominant eigenvalue \lambda_1. Since the dominant eigenvalue of A is unique, the first Jordan block of J is the 1 \times 1 matrix [\lambda_1], where \lambda_1 is the largest eigenvalue of A in magnitude. The starting vector b_0 can be written as a linear combination of the columns of V:

    b_0 = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n.

By assumption, b_0 has a nonzero component in the direction of the dominant eigenvector, so c_1 \neq 0.
The computationally useful recurrence relation for b_{k+1} can be rewritten as

    b_{k+1} = \frac{A b_k}{\| A b_k \|} = \frac{A^{k+1} b_0}{\| A^{k+1} b_0 \|},

where the expression A^{k+1} b_0 / \| A^{k+1} b_0 \| is more amenable to the following analysis.
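As a quick numerical sanity check of this equality before the analysis (the matrix, starting vector, and k = 5 are arbitrary choices for illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b0 = np.random.rand(2)

    # Normalizing at every step...
    b = b0.copy()
    for _ in range(5):
        b = A @ b
        b /= np.linalg.norm(b)

    # ...gives the same unit vector as normalizing A^k b0 once at the end,
    # because intermediate normalizations only change positive scalar factors.
    direct = np.linalg.matrix_power(A, 5) @ b0
    direct /= np.linalg.norm(direct)

    print(np.allclose(b, direct))  # True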
    b_k = \frac{A^k b_0}{\| A^k b_0 \|}
        = \frac{(V J V^{-1})^k b_0}{\| (V J V^{-1})^k b_0 \|}
        = \frac{V J^k V^{-1} b_0}{\| V J^k V^{-1} b_0 \|}
        = \frac{V J^k V^{-1} (c_1 v_1 + c_2 v_2 + \cdots + c_n v_n)}{\| V J^k V^{-1} (c_1 v_1 + c_2 v_2 + \cdots + c_n v_n) \|}
        = \frac{V J^k (c_1 e_1 + c_2 e_2 + \cdots + c_n e_n)}{\| V J^k (c_1 e_1 + c_2 e_2 + \cdots + c_n e_n) \|}
        = \left( \frac{\lambda_1}{|\lambda_1|} \right)^k \frac{c_1}{|c_1|}
          \frac{v_1 + \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k (c_2 e_2 + \cdots + c_n e_n)}{\left\| v_1 + \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k (c_2 e_2 + \cdots + c_n e_n) \right\|}.

Because \lambda_1 is strictly largest in magnitude, (J / \lambda_1)^k applied to the remaining Jordan blocks tends to zero as k \to \infty, so the last fraction converges to v_1 / \| v_1 \|. Therefore, b_k converges to (a multiple of) the eigenvector v_1. The convergence is geometric, with ratio

    \left| \frac{\lambda_2}{\lambda_1} \right|,

where \lambda_2 denotes the second dominant eigenvalue. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.
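The geometric rate can be observed numerically. In the sketch below (the example matrix and the error metric are illustrative assumptions), the distance from b_k to the true dominant eigenvector shrinks by roughly |\lambda_2 / \lambda_1| per step:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    v1 = eigvecs[:, -1]                    # true dominant eigenvector
    print(abs(eigvals[0] / eigvals[1]))    # |lambda_2 / lambda_1| ~ 0.382

    b = np.random.rand(2)
    for k in range(10):
        b = A @ b
        b /= np.linalg.norm(b)
        # Fix the sign ambiguity before measuring the error.
        err = np.linalg.norm(b - np.sign(b @ v1) * v1)
        print(k, err)  # each error is roughly 0.382 times the previous one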
Applications

Power iteration is not used very much because it can find only the dominant eigenvalue. Nevertheless, the algorithm is very useful in some specific situations. For instance, Google uses it to calculate the PageRank of documents in its search engine.[1]

Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix A^{-1}. Other algorithms look at the whole subspace generated by the vectors b_k. This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or Lanczos iteration.
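The connection to inverse iteration can be made concrete: each step solves a linear system with A instead of multiplying by it, which amounts to power iteration on A^{-1} without forming the inverse explicitly. A minimal sketch, under the same illustrative conventions as the earlier examples:

    import numpy as np

    def inverse_iteration(A, num_iterations=50):
        # Power iteration applied to A^{-1}: converges to the eigenvector
        # associated with the eigenvalue of A of smallest magnitude.
        b_k = np.random.rand(A.shape[1])
        for _ in range(num_iterations):
            x = np.linalg.solve(A, b_k)   # x = A^{-1} b_k
            b_k = x / np.linalg.norm(x)
        return b_k

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    print(inverse_iteration(A))  # eigenvector for the smallest-magnitude eigenvalue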
See also

* Rayleigh quotient iteration
* Inverse iteration

References

[1] Ipsen, Ilse, and Rebecca M. Wills, "[http://www4.ncsu.edu/~ipsen/ps/slides_imacs.pdf Analysis and Computation of Google's PageRank]", 7th IMACS International Symposium on Iterative Methods in Scientific Computing, Fields Institute, Toronto, Canada, 5–8 May 2005.
External links
* [http://www.math.buffalo.edu/~pitman/courses/mth437/na2/node17.html Power method], part of lecture notes on numerical linear algebra by E. Bruce Pitman, State University of New York.
* [http://www.math.gatech.edu/~carlen/2605S04/Power.pdf Power method] on www.math.gatech.edu
* [http://math.fullerton.edu/mathews/n2003/PowerMethodMod.html Module for the Power Method]