**Quasi-Newton method**

In optimization, **quasi-Newton methods** (also known as variable metric methods) are well-known algorithms for finding local maxima and minima of functions. Quasi-Newton methods are based on

Newton's method to find the stationary point of a function, where the gradient is 0. Newton's method assumes that the function can be locally approximated as a quadratic in the region around the optimum, and uses the first and second derivatives (gradient and Hessian) to find the stationary point. In quasi-Newton methods the Hessian matrix of second derivatives of the function to be minimized does not need to be computed; instead, the Hessian is updated by analyzing successive gradient vectors. Quasi-Newton methods are a generalization of the secant method, which finds the root of the first derivative, to multidimensional problems. In multiple dimensions the secant equation is under-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian. The first quasi-Newton algorithm was proposed by W.C. Davidon, a physicist working at

Argonne National Laboratory. He developed the first quasi-Newton algorithm in 1959: the DFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently the SR1 formula (for symmetric rank one) and the widespread BFGS method, which was suggested independently by Broyden, Fletcher, Goldfarb, and Shanno in 1970. The Broyden class is a linear combination of the DFP and BFGS methods. The SR1 formula does not guarantee the update matrix to maintain positive-definiteness and can be used for indefinite problems.

Broyden's method does not require the update matrix to be symmetric, and it is used to find the root of a general system of equations (rather than the gradient) by updating the Jacobian (rather than the Hessian).

**Description of the method**

As in Newton's method, one uses the second-order approximation to find the minimum of a function $f(x)$. The

Taylor series of $f(x)$ is:

:$f(x_0 + \Delta x) = f(x_0) + \nabla f(x_0)^T \Delta x + \frac{1}{2} \Delta x^T B \, \Delta x$,

where $\nabla f$ is the gradient and $B$ the Hessian matrix. The Taylor series of the gradient itself,

:$\nabla f(x_0 + \Delta x) = \nabla f(x_0) + B \, \Delta x$,

is called the secant equation. Solving $\nabla f(x_0 + \Delta x_0) = 0$ for $\Delta x_0$ provides the Newton step:

:$\Delta x_0 = -B^{-1} \nabla f(x_0)$,

but $B$ is unknown. In one dimension, solving for $B$ and applying the Newton step with the updated value is equivalent to the secant method. In multiple dimensions $B$ is under-determined. Various methods are used to find the solution to the secant equation that is symmetric ($B^T = B$) and closest to the current approximate value $B_k$ according to some metric, $\min_B \|B - B_k\|$. An approximate initial value $B_0 = I$ is often sufficient to achieve rapid convergence. The unknown $x_k$ is updated by applying the Newton step calculated using the current approximate Hessian matrix $B_k$:

* $\Delta x_k = -\alpha_k B_k^{-1} \nabla f(x_k)$, with $\alpha_k$ chosen to satisfy the Wolfe conditions;

* $x_{k+1} = x_k + \Delta x_k$;

* the gradient computed at the new point, $\nabla f(x_{k+1})$, and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$,

* is used to update the Hessian $B_{k+1}$, or directly its inverse $H_{k+1} = B_{k+1}^{-1}$ using the Sherman–Morrison formula. The most popular update formulas are the DFP, SR1, and BFGS formulas introduced above.
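The iteration described above can be sketched in code. The following is a minimal, illustrative BFGS-style implementation (one member of the quasi-Newton family named in this article), maintaining the inverse-Hessian approximation $H_k$ directly so that no second derivatives and no matrix inversions are needed. The test function, tolerances, and the simple backtracking (Armijo) line search used in place of a full Wolfe line search are assumptions for the sketch, not part of the original text.

```python
import numpy as np

def bfgs_minimize(f, grad, x0, max_iter=100, tol=1e-8):
    """Sketch of a quasi-Newton (BFGS) minimizer for smooth f."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                     # initial inverse-Hessian guess H_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:   # gradient small enough: stationary point
            break
        p = -H @ g                    # quasi-Newton direction -H_k grad f(x_k)
        # Backtracking line search (Armijo condition), standing in for
        # a full Wolfe-condition line search.
        alpha = 1.0
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p) and alpha > 1e-10:
            alpha *= 0.5
        s = alpha * p                 # s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                 # y_k = grad f(x_{k+1}) - grad f(x_k)
        sy = s @ y
        if sy > 1e-12:                # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            # BFGS rank-two update of the inverse Hessian approximation
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Usage: minimize the convex quadratic f(x) = x^T A x / 2 - b^T x,
# whose exact minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = bfgs_minimize(f, grad, np.zeros(2))
```

Note how only gradient evaluations appear in the loop: the curvature information is accumulated from the pairs $(s_k, y_k)$, exactly as the secant equation above prescribes.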

**See also**

* Newton's method in optimization

* Newton's method

* DFP updating formula

* BFGS method

* SR1 formula

* Broyden's method

**References**

* William C. Davidon, [http://link.aip.org/link/?SJE/1/1/1 Variable Metric Method for Minimization], SIOPT, Volume 1, Issue 1, pp. 1-17, 1991 (Davidon's 1959 paper, eventually published).

* Nocedal, Jorge & Wright, Stephen J. (1999). Numerical Optimization. Springer-Verlag. ISBN 0-387-98793-2.

* Edwin K. P. Chong and Stanislaw H. Zak, An Introduction to Optimization, 2nd ed., John Wiley & Sons, August 2001.
