- Power iteration
In mathematics, the power iteration is an eigenvalue algorithm: given a matrix "A", the algorithm will produce a number λ (the eigenvalue) and a nonzero vector "v" (the eigenvector), such that "Av" = λ"v".

The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when "A" is a very large sparse matrix. However, it will find only one eigenvalue (the one with the greatest absolute value) and it may converge only slowly.

The method
The power iteration algorithm starts with a vector b_0, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the iteration

    b_{k+1} = A b_k / ||A b_k||.

So, at every iteration, the vector b_k is multiplied by the matrix "A" and normalized.
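The iteration above can be sketched in NumPy; this is a minimal illustration, and the matrix, starting vector, and iteration count are arbitrary choices:

```python
import numpy as np

def power_iteration(A, num_iterations):
    # Start from a random vector; with probability 1 it has a nonzero
    # component in the direction of the dominant eigenvector.
    b_k = np.random.rand(A.shape[1])
    for _ in range(num_iterations):
        # Multiply by A ...
        b_k1 = A @ b_k
        # ... and normalize.
        b_k = b_k1 / np.linalg.norm(b_k1)
    return b_k

# Example: a symmetric 2x2 matrix with eigenvalues (5 ± sqrt(5))/2.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = power_iteration(A, 100)
# The Rayleigh quotient then gives an estimate of the dominant eigenvalue.
lam = b @ A @ b / (b @ b)
```

For this matrix the estimate settles near the dominant eigenvalue (5 + √5)/2 ≈ 3.618.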
Under the assumptions:
*"A" has an eigenvalue that is strictly greater in magnitude than its other eigenvalues
*The starting vector b_0 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue,
then:
*A subsequence of (b_k) converges to an eigenvector associated with the dominant eigenvalue.

Note that the sequence (b_k) does not necessarily converge. It can be shown that:
    b_k = e^{iφ_k} v_1/||v_1|| + r_k,

where v_1 is an eigenvector associated with the dominant eigenvalue, and ||r_k|| → 0. The presence of the term e^{iφ_k} implies that (b_k) does not converge unless e^{iφ_k} = 1. Under the two assumptions listed above, the sequence (μ_k) defined by

    μ_k = (b_k^* A b_k) / (b_k^* b_k)

converges to the dominant eigenvalue.

The method can also be used to calculate the spectral radius of a matrix by computing the Rayleigh quotient

    (b_k^T A b_k) / (b_k^T b_k).

Analysis
Let A be decomposed into its Jordan canonical form: A = V J V^{-1}, where the first column of V is an eigenvector of A corresponding to the dominant eigenvalue λ_1. Since the dominant eigenvalue of "A" is unique, the first Jordan block of J is the 1×1 matrix [λ_1], where λ_1 is the largest eigenvalue of "A" in magnitude. The starting vector b_0 can be written as a linear combination of the columns of V:

    b_0 = c_1 v_1 + c_2 v_2 + ⋯ + c_n v_n.

By assumption, b_0 has a nonzero component in the direction of the dominant eigenvalue, so c_1 ≠ 0.

The computationally useful recurrence relation for b_{k+1} can be rewritten as:

    b_{k+1} = A b_k / ||A b_k|| = A^{k+1} b_0 / ||A^{k+1} b_0||,

where the expression A^{k+1} b_0 / ||A^{k+1} b_0|| is more amenable to the following analysis.
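This rewriting can be checked numerically: iterating with a normalization at every step yields the same unit vector as forming A^k b_0 directly and normalizing once. A small sketch, with an arbitrary matrix and starting vector:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b0 = np.array([1.0, 1.0])
k = 10

# Per-step normalization, as in the algorithm.
b = b0 / np.linalg.norm(b0)
for _ in range(k):
    b = A @ b
    b /= np.linalg.norm(b)

# Normalizing A^k b0 once gives the same vector: the intermediate
# normalization constants are positive scalars and cancel out.
direct = np.linalg.matrix_power(A, k) @ b0
direct /= np.linalg.norm(direct)
```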
Therefore, b_k converges to (a multiple of) the eigenvector v_1. The convergence is geometric, with ratio

    |λ_2 / λ_1|,

where λ_2 denotes the second dominant eigenvalue. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.

Applications
Power iteration is not used very much because it can find only the dominant eigenvalue. Nevertheless, the algorithm is very useful in some specific situations. For instance,
Google uses it to calculate the PageRank of documents in their search engine. [Ipsen, Ilse, and Rebecca M. Wills, "[http://www4.ncsu.edu/~ipsen/ps/slides_imacs.pdf Analysis and Computation of Google's PageRank]", 7th IMACS International Symposium on Iterative Methods in Scientific Computing, Fields Institute, Toronto, Canada, 5–8 May 2005.]

Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix (A − μI)^{-1}, where μ is an approximation to the desired eigenvalue. Other algorithms look at the whole subspace generated by the vectors b_0, A b_0, …, A^{k−1} b_0. This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or Lanczos iteration.

See also
*Rayleigh quotient iteration
*Inverse iteration

References
External links
* [http://www.math.buffalo.edu/~pitman/courses/mth437/na2/node17.html Power method], part of lecture notes on numerical linear algebra by E. Bruce Pitman, State University of New York.
* [http://www.math.gatech.edu/~carlen/2605S04/Power.pdf Power method] on www.math.gatech.edu
* [http://math.fullerton.edu/mathews/n2003/PowerMethodMod.html Module for the Power Method]