Sherman–Morrison formula


In mathematics, in particular linear algebra, the Sherman–Morrison formula (Jack Sherman and Winifred J. Morrison, [http://projecteuclid.org/euclid.aoms/1177729940 "Adjustment of an Inverse Matrix Corresponding to Changes in the Elements of a Given Column or a Given Row of the Original Matrix"] (abstract), Annals of Mathematical Statistics, Vol. 20, p. 621, 1949; Jack Sherman and Winifred J. Morrison, [http://projecteuclid.org/euclid.aoms/1177729893 "Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix"], Annals of Mathematical Statistics, Vol. 21, No. 1, pp. 124–127, 1950, [http://www.ams.org/mathscinet-getitem?mr=35118 MR35118]; William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling, pp. 73+, Cambridge University Press, 1992, ISBN 0-521-43108-5), named after Jack Sherman and Winifred J. Morrison, computes the inverse of the sum of an invertible matrix A and the dyadic product, u v^T, of a column vector u and a row vector v^T. The Sherman–Morrison formula is a special case of the Woodbury formula.

Statement

Suppose A is an invertible square matrix and u, v are column vectors. Suppose furthermore that 1 + v^T A^{-1} u \neq 0. Then the Sherman–Morrison formula states that

:(A + uv^T)^{-1} = A^{-1} - \frac{A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}.

Here, uv^T is the dyadic product of the two vectors u and v. The general form shown here is the one published by Bartlett (M. S. Bartlett, [http://projecteuclid.org/euclid.aoms/1177729698 "An Inverse Matrix Adjustment Arising in Discriminant Analysis"], Annals of Mathematical Statistics, Vol. 22, No. 1, pp. 107–111, 1951, [http://www.ams.org/mathscinet-getitem?mr=40068 MR40068]).
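The formula can be checked numerically. The following sketch (the matrix and vectors are arbitrary illustrative choices, not from the article) compares the Sherman–Morrison right-hand side with a direct inversion of A + uv^T:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned, invertible
u = rng.standard_normal(n)
v = rng.standard_normal(n)

A_inv = np.linalg.inv(A)
denom = 1.0 + v @ A_inv @ u       # the scalar 1 + v^T A^{-1} u
assert abs(denom) > 1e-12         # hypothesis of the formula

# Right-hand side of the Sherman-Morrison formula:
# A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
sm_inverse = A_inv - np.outer(A_inv @ u, v @ A_inv) / denom

# Direct inversion of the rank-1-corrected matrix, for comparison
direct_inverse = np.linalg.inv(A + np.outer(u, v))
print(np.allclose(sm_inverse, direct_inverse))  # True
```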

Application

If the inverse of A is already known, the formula provides a numerically cheap way to compute the inverse of A corrected by the matrix uv^T (depending on the point of view, the correction may be seen as a perturbation or as a rank-1 update). The computation is relatively cheap because the inverse of A + uv^T does not have to be computed from scratch (which in general is expensive), but can be computed by correcting (or perturbing) A^{-1}. Using unit vectors for u or v, individual columns or rows of A may be manipulated, and a correspondingly updated inverse computed relatively cheaply in this way.
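The update can be packaged as a small routine. The following is an illustrative sketch (the function name and tolerance are assumptions, not from the article): given A^{-1}, it returns (A + uv^T)^{-1} in O(n^2) operations, instead of the O(n^3) cost of inverting from scratch.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1}, given A_inv = A^{-1}, in O(n^2) time."""
    Au = A_inv @ u              # A^{-1} u          (n^2 multiplications)
    vA = v @ A_inv              # v^T A^{-1}        (n^2 multiplications)
    denom = 1.0 + v @ Au        # the scalar 1 + v^T A^{-1} u
    if abs(denom) < 1e-12:
        # the formula's hypothesis fails: A + u v^T is singular (or nearly so)
        raise np.linalg.LinAlgError("1 + v^T A^{-1} u is (near) zero")
    return A_inv - np.outer(Au, vA) / denom   # rank-1 correction of A^{-1}
```

For example, `sherman_morrison_update(np.linalg.inv(A), u, v)` agrees with `np.linalg.inv(A + np.outer(u, v))` up to rounding whenever the denominator is nonzero.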

In the general case, where A^{-1} is an n \times n matrix and u and v are arbitrary vectors (in which case the whole matrix is updated), the computation takes 3n^2 scalar multiplications ([http://www.alglib.net/matrixops/general/invupdate.php Update of the inverse matrix by the Sherman-Morrison formula]).

If u is a unit vector, only one row of A is changed, and the computation takes only 2n^2 scalar multiplications. The same holds if v is a unit vector, in which case only one column of A is changed.

If both u and v are unit vectors, only a single element of A is changed, and the computation takes only n^2 scalar multiplications.
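The unit-vector case can be made concrete. In the sketch below (an illustrative example; the matrix and the column index j are arbitrary choices), taking v = e_j means A + u e_j^T is A with the vector u added to its j-th column, and the update only needs row j of A^{-1} on the right-hand side:

```python
import numpy as np

rng = np.random.default_rng(0)
n, j = 4, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)  # invertible by construction
A_inv = np.linalg.inv(A)
u = rng.standard_normal(n)

Au = A_inv @ u                 # A^{-1} u
row_j = A_inv[j]               # e_j^T A^{-1} = row j of A^{-1}
denom = 1.0 + row_j @ u        # 1 + e_j^T A^{-1} u
updated_inv = A_inv - np.outer(Au, row_j) / denom

B = A.copy()
B[:, j] += u                   # the same change applied directly to A
assert np.allclose(updated_inv, np.linalg.inv(B))
```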

Verification

We verify the properties of the inverse. A matrix Y (in this case the right-hand side of the Sherman–Morrison formula) is the inverse of a matrix X (in this case A + uv^T) if and only if XY = YX = I.

We first verify that the right-hand side (Y) satisfies XY = I. Note that v^T A^{-1} u is a scalar and can be factored out:

:XY = (A + uv^T)\left(A^{-1} - \frac{A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}\right)

::= AA^{-1} + uv^T A^{-1} - \frac{AA^{-1} u v^T A^{-1} + uv^T A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}

::= I + uv^T A^{-1} - \frac{uv^T A^{-1} + uv^T A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}

::= I + uv^T A^{-1} - \frac{(1 + v^T A^{-1} u)\, uv^T A^{-1}}{1 + v^T A^{-1} u}

::= I + uv^T A^{-1} - uv^T A^{-1}

::= I.

In the same way, we can verify that

:YX = \left(A^{-1} - \frac{A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}\right)(A + uv^T) = I.

See also

* The matrix determinant lemma performs a rank-1 update to a determinant.
* Sherman–Morrison–Woodbury formula

References

External links

* [http://www.ocolon.org/editor/template.php?.matrix_sherman_morrison Sherman–Morrison formula online calculation template]

