- Line search
In (unconstrained) optimization, the line search strategy is one of two basic iterative approaches to finding a local minimum $\mathbf{x}^*$ of an objective function $f:\mathbb{R}^n\to\mathbb{R}$. The other method is that of trust regions.

Algorithms
Gradient method
[N. I. M. Gould and S. Leyffer, An introduction to algorithms for nonlinear optimization. In J. F. Blowey, A. W. Craig, and T. Shardlow, Frontiers in Numerical Analysis, pages 109-197. Springer Verlag, Berlin, 2003.]
# Set iteration counter $k=0$ and make an initial guess $\mathbf{x}_0$ for the minimum
# Repeat:
# Compute a descent direction $\mathbf{p}_k$
# Choose $\alpha_k$ to "loosely" minimize $\phi(\alpha)=f(\mathbf{x}_k+\alpha\mathbf{p}_k)$ over $\alpha\in\mathbb{R}$
# Update $\mathbf{x}_{k+1}=\mathbf{x}_k+\alpha_k\mathbf{p}_k$ and $k=k+1$
# Until $\|\nabla f(\mathbf{x}_k)\| <$ tolerance

In step 4 we can either "exactly" minimize $\phi$, by solving $\phi'(\alpha_k)=0$, or "loosely", by asking for a sufficient decrease in $\phi$. The latter may be performed in a number of ways, perhaps by doing a backtracking line search or using the Wolfe conditions, as in the sketch below.
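As a concrete illustration of steps 1–6, here is a minimal Python sketch of the gradient method with a backtracking (Armijo) line search. The steepest-descent direction $\mathbf{p}_k=-\nabla f(\mathbf{x}_k)$, the parameter values, and all function names are illustrative assumptions, not prescribed by the algorithm above.

import numpy as np

def backtracking_line_search(f, g, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    # Step 4 done "loosely": shrink alpha until the Armijo
    # sufficient-decrease condition
    #   f(x + alpha p) <= f(x) + c alpha g^T p
    # holds (g^T p = phi'(0) < 0 for a descent direction).
    alpha, fx, slope = alpha0, f(x), g @ p
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

def gradient_method(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)          # step 1: initial guess, k = 0
    for _ in range(max_iter):                # step 2: repeat
        g = grad(x)
        if np.linalg.norm(g) < tol:          # step 6: gradient-norm stopping test
            break
        p = -g                               # step 3: steepest-descent direction
        alpha = backtracking_line_search(f, g, x, p)   # step 4
        x = x + alpha * p                    # step 5: update iterate
    return x

# Usage on a convex quadratic f(x) = x^T A x / 2 - b^T x,
# whose minimizer is A^{-1} b = (0.2, 0.4):
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = gradient_method(lambda x: 0.5 * x @ A @ x - b @ x,
                        lambda x: A @ x - b, [0.0, 0.0])

Any other descent direction, such as a Newton or quasi-Newton step, could replace $-\nabla f(\mathbf{x}_k)$ without changing the line-search machinery.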
Like other optimization methods, line search may be combined with simulated annealing to allow it to jump over some local minima.

Direct search methods
First the minimum must be bracketed, that is, it should lie between $x_1$ and $x_2$. The interval is then divided by computing $f(x)$ at two internal points, $x_3$ and $x_4$, and rejecting whichever of the two outer points has the higher function value. Subsequently only one extra internal point needs to be calculated. Of the various methods of dividing the interval [M.J. Box, D. Davies and W.H. Swann, Non-Linear Optimisation Techniques, Oliver & Boyd, 1969], those that use the golden ratio are particularly simple and effective:

:$x_4-x_1=x_2-x_3=\frac{1}{\tau}(x_2-x_1), \qquad \tau=\frac{1}{2}(1+\sqrt{5})$
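Here is a minimal Python sketch of this golden-ratio division, assuming a unimodal $f$ on the initial bracket; the function name, tolerance, and return value (the bracket midpoint) are illustrative choices.

import math

def golden_section_search(f, x1, x2, tol=1e-8):
    # Internal points at golden-ratio spacing:
    # x4 - x1 = x2 - x3 = (x2 - x1)/tau.
    tau = (1 + math.sqrt(5)) / 2
    x3 = x2 - (x2 - x1) / tau
    x4 = x1 + (x2 - x1) / tau
    f3, f4 = f(x3), f(x4)
    while abs(x2 - x1) > tol:
        if f3 < f4:
            # Reject outer point x2; the old x3 becomes the new x4,
            # so only one new evaluation (f3) is needed.
            x2, x4, f4 = x4, x3, f3
            x3 = x2 - (x2 - x1) / tau
            f3 = f(x3)
        else:
            # Reject outer point x1; the old x4 becomes the new x3.
            x1, x3, f3 = x3, x4, f4
            x4 = x1 + (x2 - x1) / tau
            f4 = f(x4)
    return (x1 + x2) / 2

# Usage: the bracket [0, 2] contains the minimum of (x - 1)^2.
assert abs(golden_section_search(lambda x: (x - 1)**2, 0.0, 2.0) - 1.0) < 1e-6

Each iteration shrinks the bracket by the constant factor $1/\tau \approx 0.618$, which is what makes the golden-ratio division simple and effective: the spacing is preserved automatically, so one of the two internal points can always be reused.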
See also

* Secant method
* Newton–Raphson method