Mehrotra predictor-corrector method
Mehrotra's predictor-corrector method in optimization is an implementation of interior point methods. It was proposed in 1989 by Sanjay Mehrotra. [Mehrotra, S. (1992). "On the implementation of a primal–dual interior point method". SIAM Journal on Optimization 2: 575–601. doi:10.1137/0802028]

The method is based on the fact that at each iteration of an interior point algorithm it is necessary to compute the Cholesky decomposition (factorization) of a large matrix in order to find the search direction. The factorization step is the most computationally expensive step in the algorithm, so it makes sense to reuse the same decomposition more than once before recomputing it. At each iteration of the algorithm, Mehrotra's predictor-corrector method uses the same Cholesky decomposition to find two different directions: a predictor and a corrector.
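To make the reuse concrete, consider the standard-form linear program of minimizing $c^{\top}x$ subject to $Ax = b$, $x \ge 0$, a common setting for this method and assumed here only for illustration. Each Newton-type direction is then obtained from the normal equations

$$A D^{2} A^{\top} \, \Delta y = r, \qquad D^{2} = X S^{-1},$$

where $X$ and $S$ are diagonal matrices built from the current primal variables $x$ and dual slacks $s$. Only the right-hand side $r$ changes between the predictor and corrector solves; since $A D^{2} A^{\top}$ is symmetric positive definite, a single Cholesky factorization serves every right-hand side at that iteration.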
The idea is to first compute a search direction based on a first-order term (the predictor). The step size that can be taken in this direction is used to evaluate how much centrality correction is needed. Then a corrector term is computed, which contains both a centrality term and a second-order term.
The complete search direction is the sum of the predictor direction and the corrector direction.
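The following is a minimal sketch of one such iteration in Python with NumPy and SciPy, again assuming the standard-form linear program described above. The function name mehrotra_step and the damping parameter eta are illustrative, and the cubic centering rule sigma = (mu_aff / mu)**3 follows common textbook presentations rather than any particular software package.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def mehrotra_step(A, b, c, x, y, s, eta=0.9995):
    """One schematic Mehrotra predictor-corrector step for
    min c.x  s.t.  A x = b, x >= 0  (a sketch, not production code)."""
    m, n = A.shape
    r_b = A @ x - b                      # primal residual
    r_c = A.T @ y + s - c                # dual residual
    mu = x @ s / n                       # duality measure
    d2 = x / s                           # diagonal of D^2 = X S^{-1}

    # Factor the normal-equations matrix ONCE; both solves below reuse it.
    M = (A * d2) @ A.T                   # A D^2 A^T
    fac = cho_factor(M)

    def solve(rb, rc, rxs):
        # Reduced system: A D^2 A^T dy = -rb - A S^{-1} rxs - A D^2 rc
        rhs = -rb - A @ (rxs / s) - (A * d2) @ rc
        dy = cho_solve(fac, rhs)         # reuses the single factorization
        ds = -rc - A.T @ dy
        dx = (rxs - x * ds) / s
        return dx, dy, ds

    def max_step(v, dv):
        # Largest step in [0, 1] keeping v + alpha * dv nonnegative.
        neg = dv < 0
        return min(1.0, (-v[neg] / dv[neg]).min()) if neg.any() else 1.0

    # Predictor (affine-scaling) direction: first-order terms only.
    dxa, dya, dsa = solve(r_b, r_c, -x * s)
    a_p, a_d = max_step(x, dxa), max_step(s, dsa)
    mu_aff = (x + a_p * dxa) @ (s + a_d * dsa) / n
    sigma = (mu_aff / mu) ** 3           # adaptive centering parameter

    # Corrector direction: centering term plus second-order term.
    dxc, dyc, dsc = solve(np.zeros(m), np.zeros(n), sigma * mu - dxa * dsa)

    # Complete direction is the sum of predictor and corrector.
    dx, dy, ds = dxa + dxc, dya + dyc, dsa + dsc
    a_p, a_d = eta * max_step(x, dx), eta * max_step(s, ds)
    return x + a_p * dx, y + a_d * dy, s + a_d * ds

Because fac is computed only once, the corrector costs just one additional triangular solve, which is why the overall step is only marginally more expensive than a single-direction interior point iteration.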
Although no theoretical complexity bound for it has yet been established, Mehrotra's predictor-corrector method is widely used in practice. ["In 1989, Mehrotra described a practical algorithm for linear programming that remains the basis of most current software; his work appeared in 1992." Potra, Florian A.; Wright, Stephen J. (2000). "Interior-point methods". Journal of Computational and Applied Mathematics 124: 281–302. doi:10.1016/S0377-0427(00)00433-7] Its corrector step reuses the Cholesky decomposition found during the predictor step in an effective way, and thus it is only marginally more expensive than a standard interior point algorithm. The additional overhead per iteration is usually paid off by a reduction in the number of iterations needed to reach an optimal solution. The method also appears to converge very fast when close to the optimum.