Line search
In (unconstrained) optimization, the line search strategy is one of two basic iterative approaches to finding a local minimum of an objective function. The other approach is the trust region method.

Algorithms
Gradient method
[N. I. M. Gould and S. Leyffer, An introduction to algorithms for nonlinear optimization. In J. F. Blowey, A. W. Craig, and T. Shardlow, Frontiers in Numerical Analysis, pages 109-197. Springer Verlag, Berlin, 2003.]
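A minimal sketch of the iteration described in this section, written in Python with steepest descent as the direction and a backtracking (Armijo) line search for the "loose" minimization in step 4; the quadratic objective and the parameter values are illustrative assumptions, not part of the original text:

```python
import numpy as np

def backtracking_gradient_descent(f, grad, x0, alpha0=1.0, rho=0.5,
                                  c=1e-4, tol=1e-8, max_iter=1000):
    """Gradient descent where each step length is chosen by a
    backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stopping test: ||grad f(x_k)|| < tolerance
            break
        p = -g                        # descent direction: steepest descent
        alpha = alpha0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while f(x + alpha * p) > f(x) + c * alpha * g.dot(p):
            alpha *= rho
        x = x + alpha * p
    return x

# Illustrative objective: f(x) = (x1 - 1)^2 + (x2 + 2)^2, minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])
x_min = backtracking_gradient_descent(f, grad, [0.0, 0.0])
```

The backtracking loop implements only the sufficient-decrease half of the Wolfe conditions, which is usually enough when the trial step is shrunk geometrically from a fixed initial step.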
# Set iteration counter k = 0, and make an initial guess x_0 for the minimum
# Repeat:
# Compute a descent direction p_k
# Choose α_k to 'loosely' minimize h(α) = f(x_k + α p_k) over α ∈ ℝ_+
# Update x_{k+1} = x_k + α_k p_k, and k = k + 1
# Until ‖∇f(x_k)‖ < tolerance

In step 4 we can either "exactly" minimize h, by solving h′(α_k) = 0, or "loosely", by asking for a sufficient decrease in h. The latter may be performed in a number of ways, perhaps by doing a
backtracking line search or using the Wolfe conditions. Like other optimization methods, line search may be combined with
simulated annealing to allow it to jump over some local minima.

Direct search methods
First the minimum must be bracketed: it should lie in an interval between x1 and x2. The interval is then divided by computing f(x) at two internal points, x3 and x4, and rejecting whichever of the two outer points has the higher function value. Subsequently only one extra internal point needs to be calculated per iteration. Of the various ways of dividing the interval [M.J. Box, D. Davies and W.H. Swann, Non-Linear Optimisation Techniques, Oliver & Boyd, 1969], those that use the golden ratio are particularly simple and effective.
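The golden-ratio interval division described above can be sketched as a golden-section search; the unimodal test function and the tolerance below are illustrative assumptions:

```python
import math

def golden_section_search(f, x1, x2, tol=1e-8):
    """Locate a minimum of a unimodal f bracketed by [x1, x2] by
    repeatedly discarding the outer point with the higher value."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/golden ratio, about 0.618
    # Two interior points dividing the interval in the golden ratio.
    x3 = x2 - invphi * (x2 - x1)
    x4 = x1 + invphi * (x2 - x1)
    f3, f4 = f(x3), f(x4)
    while abs(x2 - x1) > tol:
        if f3 < f4:                 # minimum lies in [x1, x4]; reuse x3
            x2, x4, f4 = x4, x3, f3
            x3 = x2 - invphi * (x2 - x1)
            f3 = f(x3)
        else:                       # minimum lies in [x3, x2]; reuse x4
            x1, x3, f3 = x3, x4, f4
            x4 = x1 + invphi * (x2 - x1)
            f4 = f(x4)
    return 0.5 * (x1 + x2)

# Illustrative: the minimum of (x - 2)^2 bracketed by [0, 5] is at x = 2.
x_star = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

The golden-ratio spacing is what makes one of the two interior points reusable after each rejection, so only one new function evaluation is needed per iteration, as the text notes.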
See also
* Secant method
* Newton–Raphson method