Inverse iteration
In numerical analysis, inverse iteration is an iterative eigenvalue algorithm. It is based on the power method and improves on it: whereas the power method always converges to the largest eigenvalue, inverse iteration also enables the choice of which eigenvalue to converge to.

The basic idea of power iteration is to choose an initial vector "b" (either an eigenvector approximation or a random vector) and iteratively calculate b_{k+1} = Ab_k. Except for a set of measure zero, for any initial vector the result converges to an eigenvector corresponding to the dominant eigenvalue. In practice, the vector should be normalized after every iteration.
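As an illustrative sketch (not part of the original article), a minimal power iteration in Python with NumPy might look like the following; the test matrix, starting vector, and iteration count are arbitrary choices.

import numpy as np

def power_iteration(A, num_iters=100):
    """Approximate the dominant eigenvector of A (illustrative sketch)."""
    b = np.random.default_rng(0).standard_normal(A.shape[0])  # random starting vector
    for _ in range(num_iters):
        b = A @ b                      # b_{k+1} = A b_k
        b /= np.linalg.norm(b)         # normalize after every iteration
    return b

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = power_iteration(A)
print(v, v @ A @ v)                    # Rayleigh quotient approximates the dominant eigenvalue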
Unfortunately, power iteration can only find the eigenvector corresponding to the dominant eigenvalue. We could modify the algorithm so that instead we calculate b_{k+1} = A^{-1}b_k. This algorithm would find the eigenvector corresponding to the greatest reciprocal eigenvalue, 1/λ_n, since the eigenvalues of A^{-1} are the reciprocals of the eigenvalues of "A". This is still not enough flexibility to find all the eigenvalues/eigenvectors.

However, imagine we have an approximation "μ" to the eigenvalue we are interested in. If "μ" is closer to λ_i than to any other eigenvalue of "A", then λ_i − μ is the smallest-magnitude eigenvalue of the matrix A − μI. By calculating b_{k+1} = (A − μI)^{-1}b_k, we can find the eigenvector corresponding to this eigenvalue. Once we have a suitable eigenvector approximation, we can use the Rayleigh quotient to find the eigenvalue.
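A minimal sketch of shifted inverse iteration along these lines (the matrix, shift, and iteration count below are illustrative assumptions, not a reference implementation):

import numpy as np

def inverse_iteration(A, mu, num_iters=50):
    """Approximate the eigenpair whose eigenvalue is closest to the shift mu."""
    n = A.shape[0]
    b = np.random.default_rng(1).standard_normal(n)
    M = A - mu * np.eye(n)                     # shifted matrix A - mu*I
    for _ in range(num_iters):
        b = np.linalg.solve(M, b)              # b_{k+1} = (A - mu*I)^{-1} b_k
        b /= np.linalg.norm(b)                 # normalize after every iteration
    eigenvalue = b @ A @ b                     # Rayleigh quotient recovers the eigenvalue
    return eigenvalue, b

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = inverse_iteration(A, mu=1.0)          # converges to the eigenvalue nearest 1.0
print(lam, v)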
The inverse iteration algorithm converges fairly quickly. We can, however, achieve even faster convergence by using a better eigenvalue approximation at each step. This is the idea behind Rayleigh quotient iteration.
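That refinement can be sketched as follows (an illustrative assumption about the implementation, not a reference version): after each step, the shift is replaced by the current Rayleigh quotient, at the cost of re-factorizing the shifted matrix.

import numpy as np

def rayleigh_quotient_iteration(A, mu, num_iters=10):
    """Inverse iteration with the shift updated to the Rayleigh quotient each step."""
    n = A.shape[0]
    b = np.random.default_rng(2).standard_normal(n)
    b /= np.linalg.norm(b)
    for _ in range(num_iters):
        try:
            b = np.linalg.solve(A - mu * np.eye(n), b)  # re-factorize with the new shift
        except np.linalg.LinAlgError:
            break                                       # shift hit an eigenvalue exactly
        b /= np.linalg.norm(b)
        mu = b @ A @ b                                  # better eigenvalue approximation
    return mu, b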
The inverse iteration algorithm requires solving a linear system at each step. This would appear to require O(n³) operations per iteration. However, the cost can be reduced to O(n²) per iteration if we first reduce "A" to Hessenberg form. This is usually done in practice.
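One way this is commonly arranged is sketched below, assuming SciPy is available; the O(n²) structured solve of the Hessenberg system is only noted in a comment, and a dense solve is used for brevity.

import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
A = B + B.T                            # symmetric test matrix with real eigenvalues

H, Q = hessenberg(A, calc_q=True)      # A = Q H Q^T, computed once in O(n^3)

mu = 0.5                               # illustrative shift
n = H.shape[0]
b = np.ones(n)
for _ in range(50):
    # Because H - mu*I is upper Hessenberg, this system can be solved in O(n^2)
    # (e.g. with Givens rotations); np.linalg.solve is used here only for brevity.
    b = np.linalg.solve(H - mu * np.eye(n), b)
    b /= np.linalg.norm(b)

v = Q @ b                              # map the eigenvector of H back to one of A
print(v @ A @ v)                       # Rayleigh quotient: eigenvalue of A nearest mu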
See also
* Power iteration
* Rayleigh quotient iteration