Optional stopping theorem

In probability theory, the optional stopping theorem (or Doob's optional sampling theorem) says that, under certain conditions, the expected value of a martingale at a stopping time equals its initial expected value (which is also its expected value at any fixed deterministic time). One version of the theorem is given below:

Let X_1, X_2, X_3, \ldots be a martingale and τ a stopping time with respect to X_1, X_2, X_3, \ldots. If
(a) \mathbb{E}\tau<\infty,
and
(b) there exists a constant c such that \mathbb{E}(\vert X_{i+1} - X_i \vert) \leq c for all i,
then
\mathbb{E}X_{\tau}=\mathbb{E}X_1.
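
The theorem can be illustrated concretely by simulation. The following is a minimal Monte Carlo sketch (not part of the original entry; all parameter choices are illustrative): a fair ±1 random walk started at 0 is a martingale whose increments are bounded by c = 1, and the first hitting time of {−3, 5} has finite expectation, so both conditions hold and the theorem predicts \mathbb{E}X_\tau = \mathbb{E}X_1 = 0.

```python
import random

def sample_stopped_value(lower=-3, upper=5):
    # Fair +/-1 random walk started at 0: a martingale whose increments
    # satisfy |X_{i+1} - X_i| = 1, so condition (b) holds with c = 1.
    # The first hitting time of {lower, upper} has finite expectation,
    # so condition (a) holds as well.
    x = 0
    while lower < x < upper:
        x += random.choice((-1, 1))
    return x

random.seed(0)
n = 100_000
mean = sum(sample_stopped_value() for _ in range(n)) / n
print(round(mean, 2))  # should be close to E[X_1] = 0
```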

Applications

  • The optional stopping theorem can be used to prove the impossibility of successful betting strategies for a gambler with a finite lifetime (which gives condition (a)) and a house limit on bets (condition (b)). Suppose that the gambler can wager up to c dollars on a fair coin flip at times 1, 2, 3, etc., winning his wager if the coin comes up heads and losing it if the coin comes up tails. Suppose further that he can quit whenever he likes, but cannot predict the outcome of gambles that haven't happened yet. Then the gambler's fortune over time is a martingale, and the time τ at which he decides to quit (or goes broke and is forced to quit) is a stopping time. So the theorem says that E[Xτ] = E[X1]. In other words, the gambler leaves with the same amount of money on average as when he started. (The same result holds if the gambler, instead of having a house limit on individual bets, has a finite limit on his line of credit or how far in debt he may go, though this is easier to show with another version of the theorem.)[1]
  • Consider a random walk starting at position a, with 0 ≤ a ≤ m, that goes up or down by one with equal probability at each step. Suppose further that the walk stops when it reaches 0 or m; the time at which this first occurs is a stopping time. If it is known that the expected time until the walk stops is finite (say, from Markov chain theory), the optional stopping theorem predicts that the expected stopping position equals the initial position a. Solving a = pm + (1 − p)·0 for the probability p that the walk reaches m before 0 gives p = a/m.
  • Now consider a random walk X that starts at 0 and stops upon reaching −m or +m, and use the fact that Y_n = X_n^2 − n is a martingale (with the convention X_0 = 0). If τ is the time at which X first reaches ±m, then 0 = E[Y_0] = E[Y_τ] = m^2 − E[τ]. This gives E[τ] = m^2.
  • Care must be taken, however, to ensure that all the conditions of the theorem hold. For example, suppose the last example had instead used a 'one-sided' stopping time, so that stopping occurred only at +m, not at −m. The value of X at this stopping time would then be m, so E[Xτ] = m, seemingly in violation of the theorem, which would require E[Xτ] = 0. The theorem does not apply here because one of its hypotheses must fail: the increments are bounded, so it is condition (a), and indeed the expected time for the random walk to reach +m (or any other non-zero level) is infinite.
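
The two random-walk calculations above can be checked numerically. This is a hedged Monte Carlo sketch, not part of the original entry; the values a = 3 and m = 10 are arbitrary illustrations:

```python
import random

def walk(start, lower, upper):
    """Run a fair +/-1 walk from `start` until it first hits `lower` or
    `upper`; return the final position and the number of steps taken."""
    x, steps = start, 0
    while lower < x < upper:
        x += random.choice((-1, 1))
        steps += 1
    return x, steps

random.seed(1)
n = 20_000

# Walk from a, absorbed at 0 or m: P(reach m before 0) should be a/m.
a, m = 3, 10
hits = sum(1 for _ in range(n) if walk(a, 0, m)[0] == m)
p_hat = hits / n  # close to a/m = 0.3

# Walk from 0, absorbed at -m or +m: E[tau] should be m^2.
mean_tau = sum(walk(0, -m, m)[1] for _ in range(n)) / n  # close to 100
print(p_hat, mean_tau)
```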

Proof

Assume the conditions in the version given above and let X_t^\tau denote the stopped process. This is also a martingale. Since condition (a) implies that τ is almost surely finite, X_t^\tau converges to X_\tau almost surely. Now writing the stopped process as

X_t^\tau=X_1+\sum_{i=1}^{(\tau \wedge t)-1}(X_{i+1}-X_i)

gives

|X_t^\tau|\leq|X_1|+\sum_{i=1}^{(\tau \wedge t)-1}|X_{i+1}-X_i|

so \mathbb{E}|X_t^\tau| is finite, since

\mathbb{E}|X_t^\tau|\leq\mathbb{E}|X_1|+\mathbb{E}\sum_{i=1}^{(\tau \wedge t)-1}|X_{i+1}-X_i|\leq\mathbb{E}|X_1|+c\cdot\mathbb{E}(\tau \wedge t-1)\leq\mathbb{E}|X_1|+c\cdot\mathbb{E}\tau<\infty.

Moreover, |X_1|+\sum_{i=1}^{\tau-1}|X_{i+1}-X_i| dominates every X_t^\tau and is integrable by the same estimate, so the dominated convergence theorem implies \mathbb{E}X_1=\mathbb{E}X_1^\tau=\mathbb{E}X_t^\tau\rightarrow\mathbb{E}X_\tau.
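
The two facts used in the proof, that the stopped process is again a martingale and that it converges to X_τ, can also be observed numerically. A minimal sketch under illustrative parameters (not part of the original text):

```python
import random

def stopped_path(t_max, lower=-5, upper=5):
    """One path of the stopped walk: a fair +/-1 walk from 0 that is
    frozen once it first hits `lower` or `upper`."""
    x, path = 0, [0]
    for _ in range(t_max - 1):
        if lower < x < upper:
            x += random.choice((-1, 1))
        path.append(x)
    return path

random.seed(2)
n, t_max = 10_000, 200
paths = [stopped_path(t_max) for _ in range(n)]

# Martingale property of the stopped process: E[X_t^tau] stays near 0
# at every fixed time t.
means = [sum(p[t] for p in paths) / n for t in (0, 9, 99, 199)]

# Almost-sure convergence: by t = 200 nearly every path has been
# absorbed at -5 or +5, i.e. already equals X_tau.
absorbed = sum(1 for p in paths if p[-1] in (-5, 5)) / n
print(means, absorbed)
```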

Wikimedia Foundation. 2010.
