QR algorithm
In numerical linear algebra, the QR algorithm is an eigenvalue algorithm; that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR transformation was developed in 1961 by John G.F. Francis (England) and by Vera N. Kublanovskaya (USSR), working independently. [J.G.F. Francis, "The QR Transformation, I" (part 1), "The Computer Journal", vol. 4, no. 3, pages 265–271 (1961) [http://comjnl.oxfordjournals.org/cgi/content/abstract/4/3/265 online at oxfordjournals.org]; "The QR Transformation, II" (part 2), "The Computer Journal", vol. 4, no. 4, pages 332–345 (1962) [http://comjnl.oxfordjournals.org/cgi/content/abstract/4/4/332 online at oxfordjournals.org]. Vera N. Kublanovskaya, "On some algorithms for the solution of the complete eigenvalue problem," "USSR Computational Mathematics and Mathematical Physics", vol. 3, pages 637–657 (1961). Also published in: "Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki", vol. 1, no. 4, pages 555–570 (1961). For biographical information about John G.F. Francis, see "Notes" in Wikipedia article "Eigenvalue, eigenvector and eigenspace". For biographical information about Vera Kublanovskaya, see Wikipedia article "Vera Kublanovskaya".] The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.

The practical QR algorithm
Formally, let A be the matrix whose eigenvalues we want to compute, and let A_0 := A. At the k-th step (starting with k = 0), we write A_k as the product of an orthogonal matrix Q_k and an upper triangular matrix R_k, and we form A_{k+1} = R_k Q_k. Note that

A_{k+1} = R_k Q_k = Q_k^{-1} Q_k R_k Q_k = Q_k^{-1} A_k Q_k = Q_k^T A_k Q_k,

so all the A_k are similar and hence have the same eigenvalues. The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.
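The step above can be sketched directly in code. The following is a minimal illustration of the basic, unshifted iteration; the matrix and iteration count are chosen only for demonstration:

```python
import numpy as np

def qr_algorithm(A, iterations=200):
    """Basic unshifted QR iteration: factor A_k = Q_k R_k, then form
    A_{k+1} = R_k Q_k, an orthogonal similarity transform of A_k."""
    Ak = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)  # A_k = Q_k R_k
        Ak = R @ Q               # A_{k+1} = R_k Q_k = Q_k^T A_k Q_k
    return Ak

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])      # eigenvalues 5 and 2
Ak = qr_algorithm(A)
# The iterates approach an upper triangular matrix whose diagonal
# holds the eigenvalues.
```

The sub-diagonal entry decays roughly like (2/5)^k here, the ratio of the two eigenvalue magnitudes, which is why convergence depends on eigenvalue separation.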
Under certain conditions [Golub, G. H. and Van Loan, C. F.: Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore, 1996, ISBN 0-8018-5414-8.], the matrices A_k converge to a triangular matrix. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros, but the Gershgorin circle theorem provides a bound on the error.

In this crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix A to upper
Hessenberg form (which costs O(n³) operations using Householder reduction) with a finite sequence of orthogonal similarity transforms, much like a QR decomposition. Determining the QR decomposition of an upper Hessenberg matrix costs O(n²) operations.

If the original matrix is symmetric, then the upper Hessenberg matrix is also symmetric and thus tridiagonal, and so are all the A_k. This procedure costs O(n³) operations using Householder reduction. Determining the QR decomposition of a tridiagonal matrix costs only O(n) operations.
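The Householder reduction to Hessenberg form can be sketched as follows. This is a simplified illustration, not a production implementation; the example matrix is an assumption chosen so that, being symmetric, the result is also tridiagonal:

```python
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form H = Q^T A Q using a sequence
    of Householder reflectors, each zeroing one column below the
    first sub-diagonal."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k+1:, k].copy()
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue                 # column already reduced
        v /= norm_v
        # Apply the reflector P = I - 2 v v^T from both sides, so the
        # transformation is an orthogonal similarity (eigenvalues kept).
        H[k+1:, k:] -= 2.0 * np.outer(v, v @ H[k+1:, k:])
        H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
    return H

A = np.array([[4.0, 1.0, 2.0, 1.0],
              [1.0, 3.0, 0.0, 2.0],
              [2.0, 0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0, 2.0]])  # symmetric example
H = hessenberg(A)
# H is upper Hessenberg; since A is symmetric, H is tridiagonal.
```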
The rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.
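One way such a shifted iteration can look is sketched below for a symmetric matrix, using an explicit Wilkinson-style shift and deflation; the matrix, tolerance, and shift choice are illustrative assumptions rather than the only possibilities:

```python
import numpy as np

def shifted_qr_eigenvalues(A, tol=1e-12):
    """Explicitly shifted QR iteration with deflation for a symmetric
    matrix: iterate on H - mu*I, and when the last sub-diagonal entry
    is negligible, record the isolated eigenvalue and shrink the matrix."""
    H = np.array(A, dtype=float)
    eigs = []
    n = H.shape[0]
    while n > 1:
        while abs(H[n-1, n-2]) > tol:
            # Wilkinson shift: the eigenvalue of the trailing 2x2 block
            # closest to the bottom-right entry.
            a, b, c = H[n-2, n-2], H[n-1, n-2], H[n-1, n-1]
            d = (a - c) / 2.0
            mu = c - np.copysign(b * b / (abs(d) + np.hypot(d, b)), d)
            Q, R = np.linalg.qr(H - mu * np.eye(n))
            H = R @ Q + mu * np.eye(n)   # still similar to the original
        eigs.append(H[n-1, n-1])         # eigenvalue isolated: deflate
        H = H[:n-1, :n-1]
        n -= 1
    eigs.append(H[0, 0])
    return sorted(eigs)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])  # eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)
eigs = shifted_qr_eigenvalues(A)
```

With this shift each eigenvalue is typically isolated in very few iterations, matching the behavior described above.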
The implicit QR algorithm
In modern computational practice [Golub and Van Loan, chapter 7], the QR algorithm is performed in an implicit version which makes the use of multiple shifts easier to introduce. The matrix is first brought to upper Hessenberg form as in the explicit version; then, at each step, the first column of A_k is transformed via a small-size Householder similarity transformation to the first column of p(A_k), where p, of degree r, is the polynomial that defines the shifting strategy (often p(x) = (x − λ)(x − λ̄), where λ and λ̄ are the two eigenvalues of the trailing 2×2 principal submatrix of A_k, the so-called "implicit double-shift"). Then successive Householder transformations of size r + 1 are performed in order to return the working matrix A_k to upper Hessenberg form. This operation is known as "bulge chasing", due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm. As in the first version, deflation is performed as soon as one of the sub-diagonal entries of A_k is sufficiently small.
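The vector that starts an implicit double-shift step can be computed cheaply, since for an upper Hessenberg matrix only the first three entries of the first column of p(H) are nonzero. The sketch below assumes H is real upper Hessenberg and p(x) = x² − s·x + t, with s and t the trace and determinant of the trailing 2×2 block:

```python
import numpy as np

def double_shift_first_column(H):
    """First column of p(H) = (H - l1*I)(H - l2*I) = H^2 - s*H + t*I,
    where l1, l2 are the eigenvalues of the trailing 2x2 block, so
    s = l1 + l2 (trace) and t = l1*l2 (determinant) are real even when
    l1, l2 are complex conjugates. For upper Hessenberg H only the
    first three entries of this column can be nonzero."""
    n = H.shape[0]
    s = H[n-2, n-2] + H[n-1, n-1]
    t = H[n-2, n-2] * H[n-1, n-1] - H[n-2, n-1] * H[n-1, n-2]
    x = H[0, 0] * H[0, 0] + H[0, 1] * H[1, 0] - s * H[0, 0] + t
    y = H[1, 0] * (H[0, 0] + H[1, 1] - s)
    z = H[1, 0] * H[2, 1] if n > 2 else 0.0
    return np.array([x, y, z])

# Example on a random 4x4 upper Hessenberg matrix.
rng = np.random.default_rng(0)
H = np.triu(rng.standard_normal((4, 4)), -1)
v = double_shift_first_column(H)
```

A Householder reflector built from this vector introduces the "bulge", which the subsequent small Householder transformations then chase down the matrix to restore Hessenberg form.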
Renaming proposal
Since in the modern implicit version of the procedure no QR decompositions are explicitly performed, some authors, for instance Watkins [Watkins, David S.: The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, SIAM, Philadelphia, PA, USA, 2007, ISBN 0898716411, 9780898716412], have suggested changing its name to "Francis algorithm". Golub and Van Loan use the term "Francis QR step".

Interpretation and convergence
The QR algorithm can be seen as a more sophisticated variation of the basic "power" eigenvalue algorithm. Recall that the power algorithm repeatedly multiplies A times a single vector, normalizing after each iteration. The vector converges to an eigenvector of the largest eigenvalue. Instead, the QR algorithm works with a complete basis of vectors, using QR decomposition to renormalize (and orthogonalize). For a symmetric matrix A, upon convergence, AQ = QΛ, where Λ is the diagonal matrix of eigenvalues to which A converged, and where Q is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of Q are the eigenvectors.

External links
* [http://www.math.umn.edu/~olver/aims_/qr.pdf Prof. Peter Olver's notes on orthogonal bases and the workings of the QR algorithm]