Davidon-Fletcher-Powell formula

The Davidon-Fletcher-Powell formula (DFP), named after William C. Davidon, Roger Fletcher, and Michael J. D. Powell, finds the solution to the secant equation that is closest to the current estimate and satisfies the curvature condition (see below). It was the first quasi-Newton method to generalize the secant method to multidimensional problems. The update maintains the symmetry and positive definiteness of the Hessian approximation.

Given a function f(x), its gradient \nabla f, and a positive definite Hessian approximation B, the Taylor series is

    f(x_k + s_k) = f(x_k) + \nabla f(x_k)^T s_k + \frac{1}{2} s_k^T B s_k,

and the Taylor series of the gradient itself (the secant equation),

    \nabla f(x_k + s_k) = \nabla f(x_k) + B s_k,

is used to update B. The DFP formula finds a solution that is symmetric, positive definite, and closest to the current approximation B_k:

    B_{k+1} = (I - \gamma_k y_k s_k^T) B_k (I - \gamma_k s_k y_k^T) + \gamma_k y_k y_k^T,

where

    y_k = \nabla f(x_k + s_k) - \nabla f(x_k), \qquad \gamma_k = \frac{1}{y_k^T s_k},

and B_k is a symmetric and positive definite matrix. The corresponding update to the inverse Hessian approximation H_k = B_k^{-1} is

    H_{k+1} = H_k - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \frac{s_k s_k^T}{y_k^T s_k}.
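For concreteness, here is a minimal NumPy sketch of both updates. The function name dfp_update and its interface are illustrative assumptions, not part of the formula itself.

import numpy as np

def dfp_update(B, H, s, y):
    # One DFP step: update the Hessian approximation B and its inverse H.
    # s = x_{k+1} - x_k, y = grad f(x_{k+1}) - grad f(x_k).
    gamma = 1.0 / (y @ s)              # gamma_k = 1 / (y_k^T s_k)
    I = np.eye(len(s))
    # B_{k+1} = (I - gamma y s^T) B (I - gamma s y^T) + gamma y y^T
    B_new = ((I - gamma * np.outer(y, s)) @ B @ (I - gamma * np.outer(s, y))
             + gamma * np.outer(y, y))
    # H_{k+1} = H - (H y y^T H) / (y^T H y) + (s s^T) / (y^T s)
    Hy = H @ y
    H_new = H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (y @ s)
    return B_new, H_new

Both forms preserve symmetry by construction, and they preserve positive definiteness as long as the curvature condition below holds.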

B is assumed to be positive definite, and the vectors s_k and y_k must satisfy the curvature condition

    s_k^T y_k = s_k^T B s_k > 0.

The DFP formula is quite effective, but it was soon superseded by the BFGS formula.
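To see the condition in action, here is a small sketch that minimizes a strictly convex quadratic with the inverse-Hessian form of the DFP update. The test problem (A, b) and the exact line search are assumptions chosen for the demo; for a general function, a line search satisfying the Wolfe conditions is what guarantees s_k^T y_k > 0.

import numpy as np

# Hypothetical test problem: f(x) = 1/2 x^T A x - b^T x, minimizer A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
H = np.eye(2)                          # H_0: symmetric positive definite
for k in range(20):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    p = -H @ g                         # quasi-Newton search direction
    alpha = -(g @ p) / (p @ A @ p)     # exact line search for a quadratic
    s = alpha * p
    y = grad(x + s) - g                # here y = A s, so s^T y = s^T A s > 0
    assert s @ y > 0                   # curvature condition
    Hy = H @ y                         # DFP inverse-Hessian update
    H = H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (y @ s)
    x = x + s

print(x, np.linalg.solve(A, b))        # the two should agree

On a quadratic, DFP with exact line searches terminates in at most n steps (here n = 2), which is why the loop exits almost immediately.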

See also

* Newton's method
* Newton's method in optimization
* Quasi-Newton method
* Broyden-Fletcher-Goldfarb-Shanno (BFGS) method
* L-BFGS method
* SR1 formula

References

* Davidon, W. C. (1991). "Variable metric method for minimization". SIAM Journal on Optimization 1: 1-17.
* Nocedal, Jorge; Wright, Stephen J. (1999). Numerical Optimization. Springer-Verlag. ISBN 0-387-98793-2.

