Stochastic optimization
**Stochastic optimization** (SO) methods are optimization algorithms which incorporate probabilistic (random) elements, either in the problem data (the objective function, the constraints, etc.), in the algorithm itself (through random parameter values, random choices, etc.), or in both (Spall, J. C. (2003), *Introduction to Stochastic Search and Optimization*, Wiley, http://www.jhuapl.edu/ISSO). The concept contrasts with deterministic optimization methods, where the values of the objective function are assumed to be exact and the computation is completely determined by the values sampled so far.

**Methods for stochastic functions**

Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system (Fu, M. C. (2002), "Optimization for Simulation: Theory vs. Practice", *INFORMS Journal on Computing* 14, 192–227, with discussion by S. Andradóttir, P. Glynn, and J. P. Kelly, doi:10.1287/ijoc.14.3.192.113), and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include:

* stochastic approximation (SA), by Robbins and Monro (1951) (Robbins, H. and Monro, S. (1951), "A Stochastic Approximation Method", *Annals of Mathematical Statistics* 22, 400–407, doi:10.1214/aoms/1177729586)
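A minimal sketch of the Robbins–Monro iteration, assuming an invented test problem (the function M(x) = 2x − 4, its noise model, and the step count are illustrative choices, not from the source):

```python
import random

def robbins_monro(noisy_observation, x0, target=0.0, steps=2000):
    """Robbins-Monro iteration: seek x with E[noisy_observation(x)] = target.

    The gains a_n = 1/n satisfy the classic conditions from the 1951 paper:
    sum a_n diverges while sum a_n**2 converges.
    """
    x = x0
    for n in range(1, steps + 1):
        y = noisy_observation(x)          # noisy measurement at x
        x = x - (1.0 / n) * (y - target)  # step against the observed error
    return x

# Invented test problem: M(x) = 2x - 4 observed with uniform noise;
# the root of M(x) = 0 is x = 2, which the iteration should approach.
random.seed(0)
est = robbins_monro(lambda x: 2 * x - 4 + random.uniform(-1, 1), x0=0.0)
```

Note that only noisy evaluations of M are used; no exact gradient or function value is ever available to the algorithm.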

** stochastic gradient descent
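Stochastic gradient descent can be sketched on an invented regression task (the data-generating model y = 3x + noise, the learning rate, and the epoch count are all illustrative assumptions):

```python
import random

# Invented task: recover the slope of y = 3x + noise by descending the
# gradient of the per-sample squared error, one sample at a time.
random.seed(1)
data = [(i / 100.0, 3 * (i / 100.0) + random.gauss(0, 0.1))
        for i in range(1, 101)]

w = 0.0          # parameter to learn
lr = 0.05        # constant learning rate (a common simplification)
for epoch in range(50):
    random.shuffle(data)             # visit samples in random order
    for x, y in data:
        grad = 2 * (w * x - y) * x   # gradient of (w*x - y)**2 in w
        w -= lr * grad
```

Each update uses the gradient of a single random sample rather than the full sum, which is what makes the descent "stochastic".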

** finite-difference SA by Kiefer and Wolfowitz (1952) (Kiefer, J. and Wolfowitz, J. (1952), "Stochastic Estimation of the Maximum of a Regression Function", *Annals of Mathematical Statistics* 23, 462–466, doi:10.1214/aoms/1177729392)
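A sketch of the Kiefer–Wolfowitz finite-difference scheme, again on an invented quadratic test problem (the gain sequences shown satisfy the standard conditions, but the specific constants and objective are illustrative assumptions):

```python
import random

def kiefer_wolfowitz(noisy_f, x0, steps=1500):
    """Finite-difference SA: climb a regression function seen only through
    noisy evaluations, using a two-sided difference as a gradient estimate.

    Gains a_n = 1/n and widths c_n = n**-0.25 are standard illustrative
    choices satisfying the Kiefer-Wolfowitz convergence conditions.
    """
    x = x0
    for n in range(1, steps + 1):
        a, c = 1.0 / n, n ** -0.25
        # Estimate f'(x) from two noisy samples, one on each side of x.
        g = (noisy_f(x + c) - noisy_f(x - c)) / (2 * c)
        x = x + a * g          # ascend the estimated gradient
    return x

# Invented test problem: maximize M(x) = -(x - 1)**2 under Gaussian noise;
# the maximizer is x = 1.
random.seed(2)
xstar = kiefer_wolfowitz(lambda x: -(x - 1) ** 2 + random.gauss(0, 0.1), x0=0.0)
```

Each iteration costs two noisy function evaluations per dimension; Spall's simultaneous perturbation variant cited below reduces this to two evaluations total regardless of dimension.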

** simultaneous perturbation SA by Spall (1992) (Spall, J. C. (1992), "Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation", *IEEE Transactions on Automatic Control* 37, 332–341, http://www.jhuapl.edu/SPSA, doi:10.1109/9.119632)

**Randomized search methods**

On the other hand, even when the data are exact, it is sometimes beneficial to deliberately introduce randomness into the search process as a means of speeding convergence and making the algorithm less sensitive to modeling errors. Further, the injected randomness may provide the necessary impetus to move away from a local solution when searching for a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with "almost certain" good performance uniformly across all data sets, for all sorts of problems. Stochastic optimization methods of this kind include:

* simulated annealing by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi (1983) (Kirkpatrick, S., Gelatt, C. D. and Vecchi, M. P. (1983), "Optimization by Simulated Annealing", *Science* 220 (4598), 671–680, http://citeseer.ist.psu.edu/kirkpatrick83optimization.html, doi:10.1126/science.220.4598.671)
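A minimal simulated annealing sketch with a Metropolis acceptance rule and geometric cooling; the 1-D multimodal objective, starting point, and schedule constants below are invented for illustration:

```python
import math
import random

def simulated_annealing(f, x0, t0=2.0, cooling=0.995, steps=4000):
    """Minimize f: always accept downhill moves, accept uphill moves with
    the Metropolis probability exp(-delta/T), and cool T geometrically."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + random.uniform(-1.0, 1.0)   # random neighbour of x
        fc = f(cand)
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc                   # move (possibly uphill)
            if fx < fbest:
                best, fbest = x, fx            # remember the best seen
        t *= cooling                           # geometric cooling schedule
    return best, fbest

# Invented multimodal objective (1-D Rastrigin-style): local minima near
# every integer, global minimum f(0) = 0; greedy descent from x0 = 3
# would get trapped, while the uphill acceptances let the search escape.
random.seed(3)
obj = lambda x: x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)
best, fbest = simulated_annealing(obj, x0=3.0)
```

As the temperature falls, uphill acceptances become rare and the search settles into (ideally) the global basin.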

* evolutionary algorithms
** genetic algorithms by Goldberg (1989) (Goldberg, D. E. (1989), *Genetic Algorithms in Search, Optimization, and Machine Learning*, Addison-Wesley, http://www-illigal.ge.uiuc.edu)
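A sketch of a simple generational genetic algorithm (tournament selection, one-point crossover, bitwise mutation); the OneMax fitness function and all parameter values are illustrative assumptions, not prescriptions from the source:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      p_mut=0.02):
    """Minimal generational GA over bitstrings: tournament selection,
    one-point crossover, and independent bitwise mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        # Pick two individuals at random and keep the fitter one.
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randint(1, n_bits - 1)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if random.random() < p_mut else bit
                     for bit in child]               # bitwise mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax toy problem: fitness is the number of 1-bits, optimum is all ones.
random.seed(4)
best = genetic_algorithm(fitness=sum)
```

Selection, crossover, and mutation are all randomized, yet the population reliably drifts toward high-fitness strings.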

* cross-entropy method by Rubinstein and Kroese (2004) (Rubinstein, R. Y. and Kroese, D. P. (2004), *The Cross-Entropy Method*, Springer-Verlag, http://www.cemethod.org)
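A 1-D sketch of the cross-entropy method for optimization: sample candidates from a Gaussian, refit the Gaussian to the elite (best-scoring) samples, and repeat. The objective, sample sizes, and iteration count are invented for illustration:

```python
import random

def cross_entropy_minimize(f, mu=0.0, sigma=5.0, n_samples=60,
                           n_elite=10, iterations=40):
    """Cross-entropy method sketch: the sampling distribution N(mu, sigma)
    is repeatedly refit to the lowest-f (elite) samples, concentrating
    the search around promising regions."""
    for _ in range(iterations):
        xs = [random.gauss(mu, sigma) for _ in range(n_samples)]
        elite = sorted(xs, key=f)[:n_elite]            # best candidates
        mu = sum(elite) / n_elite                      # refit mean
        var = sum((x - mu) ** 2 for x in elite) / n_elite
        sigma = max(var ** 0.5, 1e-3)                  # floor keeps sampling alive
    return mu

# Invented objective with global minimum at x = 3.
random.seed(5)
xmin = cross_entropy_minimize(lambda x: (x - 3) ** 2)
```

The small floor on sigma is a practical guard against the sampling distribution collapsing before the search has converged.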

* random search by Zhigljavsky (1991) (Zhigljavsky, A. A. (1991), *Theory of Global Random Search*, Kluwer Academic)
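Pure (blind) random search, the simplest global random search scheme, can be sketched as follows; the box bounds and the multimodal objective are invented for illustration:

```python
import math
import random

def random_search(f, bounds, n_iter=5000):
    """Pure random search: sample uniformly in the box, keep the best
    point seen so far; no structure of f is exploited at all."""
    lo, hi = bounds
    best_x = random.uniform(lo, hi)
    best_f = f(best_x)
    for _ in range(n_iter):
        x = random.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Invented multimodal objective on [-10, 10]; its global minimum
# (value about -0.96) lies near x = 2.2.
random.seed(6)
bx, bf = random_search(lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x),
                       bounds=(-10.0, 10.0))
```

With enough samples the best point found converges (almost surely) to the global optimum, though far more slowly than methods that exploit problem structure.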

* stochastic hill climbing

**See also**

* Machine learning

**References**

* Michalewicz, Z. and Fogel, D. B. (2000), *How to Solve It: Modern Heuristics*, Springer-Verlag, New York.

**External links**

**Software**

* [http://www.aimms.com/aimms/context/mathematical_programming/stochastic_programming.html AIMMS]
* [http://www.optirisk-systems.com SPInE]
* [http://www.dash-optimization.com/home/products/products_sp.html XPRESS-SP]

*Wikimedia Foundation. 2010.*