Stochastic optimization

Stochastic optimization (SO) methods are optimization algorithms that incorporate probabilistic (random) elements, either in the problem data (the objective function, the constraints, etc.), in the algorithm itself (through random parameter values, random choices, etc.), or in both [Spall, J. C. (2003). Introduction to Stochastic Search and Optimization. Wiley. http://www.jhuapl.edu/ISSO]. The concept contrasts with deterministic optimization methods, where the values of the objective function are assumed to be exact and the computation is completely determined by the values sampled so far.
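
As a brief illustration of the first case (randomness entering through the problem data), the sketch below contrasts a deterministic objective with a noisy version of the same objective; the quadratic test function and the Gaussian noise model are illustrative assumptions only, not tied to any particular method.

import random

def f_exact(x):
    # Deterministic objective: repeated evaluations at the same x agree exactly.
    return (x - 2.0) ** 2

def f_noisy(x, noise_std=0.1):
    # Stochastic setting: each evaluation is corrupted by additive Gaussian
    # "noise", so repeated evaluations at the same x differ, and an optimizer
    # only ever sees these contaminated values.
    return f_exact(x) + random.gauss(0.0, noise_std)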

Methods for stochastic functions

Partly random input data arise in such areas as real-time estimation and control; simulation-based optimization, where Monte Carlo simulations are run as estimates of an actual system [Fu, M. C. (2002). "Optimization for Simulation: Theory vs. Practice". INFORMS Journal on Computing 14: 192–227 (with discussion by S. Andradóttir, P. Glynn, and J. P. Kelly). doi:10.1287/ijoc.14.3.192.113]; and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include the following (a sketch of one such method appears after the list):
* stochastic approximation (SA), by Robbins and Monro (1951) [Robbins, H. and Monro, S. (1951). "A Stochastic Approximation Method". Annals of Mathematical Statistics 22: 400–407. doi:10.1214/aoms/1177729586]
** stochastic gradient descent
** finite-difference SA by Kiefer and Wolfowitz (1952) [Kiefer, J. and Wolfowitz, J. (1952). "Stochastic Estimation of the Maximum of a Regression Function". Annals of Mathematical Statistics 23: 462–466. doi:10.1214/aoms/1177729392]
** simultaneous perturbation SA by Spall (1992) [Spall, J. C. (1992). "Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation". IEEE Transactions on Automatic Control 37: 332–341. doi:10.1109/9.119632. http://www.jhuapl.edu/SPSA]
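
To make the flavour of these methods concrete, the following is a minimal sketch of simultaneous perturbation stochastic approximation in the spirit of Spall (1992): the gradient is estimated from only two noisy measurements per iteration, regardless of the problem dimension. The toy objective, gain sequences, and iteration count below are illustrative assumptions rather than recommended settings.

import random

def spsa_minimize(noisy_f, theta, iterations=1000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101):
    # Minimize a noisy objective noisy_f starting from the point theta.
    theta = list(theta)
    p = len(theta)
    for k in range(1, iterations + 1):
        ak = a / k ** alpha  # decaying step-size gain a_k
        ck = c / k ** gamma  # decaying perturbation gain c_k
        # Random simultaneous perturbation of all coordinates (Rademacher +/-1).
        delta = [random.choice((-1.0, 1.0)) for _ in range(p)]
        theta_plus = [t + ck * d for t, d in zip(theta, delta)]
        theta_minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = noisy_f(theta_plus) - noisy_f(theta_minus)
        # Gradient estimate from just two measurements, whatever the dimension p.
        ghat = [diff / (2.0 * ck * d) for d in delta]
        theta = [t - ak * g for t, g in zip(theta, ghat)]
    return theta

# Example usage on a noisy quadratic whose true minimizer is (2, -1).
def noisy_quadratic(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + random.gauss(0.0, 0.1)

print(spsa_minimize(noisy_quadratic, [0.0, 0.0]))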

Randomized search methods

On the other hand, even when the data are exact, it is sometimes beneficial to deliberately introduce randomness into the search process as a means of speeding convergence and making the algorithm less sensitive to modeling errors. Further, the injected randomness may provide the necessary impetus to move away from a local solution when searching for a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with "almost certain" good performance uniformly across all data sets, for all sorts of problems. Stochastic optimization methods of this kind include the following (a sketch of one such method follows the list):
* simulated annealing by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi (1983) [Kirkpatrick, S., Gelatt, C. D. and Vecchi, M. P. (1983). "Optimization by Simulated Annealing". Science 220 (4598): 671–680. doi:10.1126/science.220.4598.671. http://citeseer.ist.psu.edu/kirkpatrick83optimization.html]
* evolutionary algorithms
** genetic algorithms by Goldberg (1989) [Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley. http://www-illigal.ge.uiuc.edu]
* cross-entropy method by Rubinstein and Kroese (2004) [Rubinstein, R. Y. and Kroese, D. P. (2004). The Cross-Entropy Method. Springer-Verlag. http://www.cemethod.org]
* random search by Zhigljavsky (1991) [Zhigljavsky, A. A. (1991). Theory of Global Random Search. Kluwer Academic]
* stochastic hill climbing
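
As an example of deliberately injected randomness, the following is a minimal sketch of simulated annealing in the spirit of Kirkpatrick, Gelatt and Vecchi (1983): a randomly chosen neighbour is always accepted if it improves the objective, and is accepted with a temperature-dependent probability otherwise, which lets the search escape local minima. The objective function, neighbourhood move, and geometric cooling schedule below are illustrative assumptions.

import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000, step_size=0.5):
    # Minimize f over the real line starting from x0.
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)  # random neighbour
        fc = f(candidate)
        # Accept improving moves always; accept worsening moves with the
        # Metropolis probability exp(-(fc - fx) / t).
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = candidate, fc
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling  # geometric cooling: the temperature shrinks each step
    return best, best_f

# Example usage on a one-dimensional multimodal test function.
print(simulated_annealing(lambda x: x * x + 3.0 * math.sin(5.0 * x), x0=4.0))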

See also

* Machine learning

External links

Software

* [http://www.aimms.com/aimms/context/mathematical_programming/stochastic_programming.html AIMMS]
* [http://www.optirisk-systems.com SPInE]
* [http://www.dash-optimization.com/home/products/products_sp.html XPRESS-SP]

