Markov property

In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It was named after the Russian mathematician Andrey Markov.[1]

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it. A process with this property is called a Markov process. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.

The term Markov assumption is used to describe a model where the Markov property is assumed to hold, such as a hidden Markov model.

A Markov random field[2] extends this property to two or more dimensions, or to random variables defined for an interconnected network of items. An example of a model for such a field is the Ising model.

For discrete-time processes with the Markov property, see Markov chain.

Both the terms "Markov property" and "strong Markov property" have been used in connection with a particular "memoryless" property of the exponential distribution.[3]

Introduction

A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to be Markovian or a Markov process. The most famous Markov process is a Markov chain. Brownian motion is another well-known Markov process.
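As a concrete sketch, a Markov chain can be simulated by repeatedly sampling the next state from a distribution determined by the present state alone. In the Python sketch below, the two-state "weather" transition probabilities are arbitrary values chosen purely for illustration:

    import random

    # Hypothetical transition probabilities, chosen purely for illustration.
    P = {
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }

    def step(state):
        # Sample the next state using only the present state.
        r, acc = random.random(), 0.0
        for nxt, p in P[state].items():
            acc += p
            if r < acc:
                return nxt
        return nxt  # guard against floating-point rounding

    state, path = "sunny", []
    for _ in range(10):
        path.append(state)
        state = step(state)  # depends only on `state`, never on `path`
    print(path)

The simulation loop never consults the accumulated path, which is exactly the memorylessness that the definition below formalizes.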

History

For some details of the early history of the Markov property, see this brief account.

Definition

The following definition applies.[4] An \mathbb{R}^n-valued stochastic process X = (X_t : t \in I) on a probability space (\Omega, \mathcal{F}, \mathbb{P}) is said to possess the Markov property if, for each A \in \mathcal{B}(\mathbb{R}^n) and each s, t \in I with s < t,

\mathbb{P}(X_t \in A \mid \mathcal{F}_s) = \mathbb{P}(X_t \in A \mid \sigma(X_s)),

where \{\mathcal{F}_t\}_{t \in I} is the natural filtration of X and \mathcal{B}(\mathbb{R}^n) denotes the Borel sigma-algebra on \mathbb{R}^n.

In the case that the process takes discrete values and is indexed by discrete time, this can be reformulated as follows:

\mathbb{P}(X_n = x_n \mid X_{n-1} = x_{n-1}, \dots, X_0 = x_0) = \mathbb{P}(X_n = x_n \mid X_{n-1} = x_{n-1}).
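This discrete formulation can be checked empirically: for a simulated chain, transition frequencies estimated with one extra step of history should agree, up to sampling noise, with those estimated from the previous state alone. A minimal Python sketch, using an arbitrary two-state transition matrix:

    import random
    from collections import Counter

    random.seed(0)
    P = [[0.9, 0.1], [0.5, 0.5]]  # arbitrary two-state transition matrix

    def simulate(n):
        x, out = 0, []
        for _ in range(n):
            out.append(x)
            x = 0 if random.random() < P[x][0] else 1
        return out

    xs = simulate(200_000)
    pairs = Counter(zip(xs, xs[1:]))            # (x_{n-1}, x_n)
    triples = Counter(zip(xs, xs[1:], xs[2:]))  # (x_{n-2}, x_{n-1}, x_n)

    # P(X_n = 0 | X_{n-1} = 0) vs. P(X_n = 0 | X_{n-2} = 1, X_{n-1} = 0):
    p_two = pairs[(0, 0)] / (pairs[(0, 0)] + pairs[(0, 1)])
    p_three = triples[(1, 0, 0)] / (triples[(1, 0, 0)] + triples[(1, 0, 1)])
    print(p_two, p_three)  # both close to 0.9: the extra history is irrelevant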

Strong Markov property

Suppose that X = (X_t : t \geq 0) is a stochastic process on a probability space (\Omega, \mathcal{F}, \mathbb{P}) with natural filtration \{\mathcal{F}_t\}_{t \geq 0}. Then X is said to have the strong Markov property if, for each stopping time \tau, conditioned on the event \{\tau < \infty\}, the process (X_{\tau + t} - X_\tau : t \geq 0) is independent of \mathcal{F}_{\tau} := \{A \in \mathcal{F} : \{\tau \leq t\} \cap A \in \mathcal{F}_t \text{ for all } t \geq 0\}, and X_{\tau + t} - X_\tau has the same distribution as X_t for each t \geq 0.

The strong Markov property is stronger than the ordinary Markov property, since the ordinary property can be deduced by taking the constant stopping time \tau = t.
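As an illustration for a simple symmetric random walk, take \tau to be the first exit time of an interval (a stopping time with finite expectation). The Python sketch below compares the post-\tau increments with those of a fresh walk; the interval width and horizon are arbitrary choices for the example:

    import random
    from collections import Counter

    random.seed(1)

    def exit_then_increment(a, t):
        # Walk from 0 until first leaving (-a, a); that first exit time is
        # a stopping time tau (with expected value a**2).  Then return the
        # post-tau increment X_{tau+t} - X_tau.
        x = 0
        while -a < x < a:
            x += random.choice((-1, 1))
        return sum(random.choice((-1, 1)) for _ in range(t))

    t = 4
    post_tau = [exit_then_increment(3, t) for _ in range(20_000)]
    fresh = [sum(random.choice((-1, 1)) for _ in range(t)) for _ in range(20_000)]

    # The two empirical distributions agree up to sampling noise.
    print(sorted(Counter(post_tau).items()))
    print(sorted(Counter(fresh).items()))

Both empirical distributions agree, reflecting that X_{\tau + t} - X_\tau is distributed like X_t.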

Alternative formulations

Alternatively, the Markov property can be formulated as follows:

\mathbb{E}[f(X_t) \mid \mathcal{F}_s] = \mathbb{E}[f(X_t) \mid \sigma(X_s)]

for all t \geq s \geq 0 and all bounded measurable functions f : \mathbb{R}^n \to \mathbb{R}.
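For a time-homogeneous chain on finitely many states, this formulation becomes explicit matrix algebra: \mathbb{E}[f(X_t) \mid X_s = x] is the x-th entry of P^{t-s} f, where P is the transition matrix, so the conditional expectation is manifestly a function of the present state alone. A minimal Python sketch with an arbitrary P and f:

    import numpy as np

    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])   # arbitrary transition matrix (rows sum to 1)
    f = np.array([2.0, -1.0])    # arbitrary bounded function on states {0, 1}

    s, t = 3, 7
    cond_exp = np.linalg.matrix_power(P, t - s) @ f
    print(cond_exp)  # entry x is E[f(X_t) | X_s = x]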


Applications

A very important application of the Markov property in a generalized form is in Markov chain Monte Carlo computations in the context of Bayesian statistics.
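In Markov chain Monte Carlo, each proposal and each accept/reject decision uses only the chain's current state, so the sampler is itself a Markov process whose stationary distribution is the target, e.g. a Bayesian posterior known only up to a normalizing constant. A minimal Metropolis sketch in Python, targeting a standard normal density chosen purely for illustration:

    import math
    import random

    random.seed(2)

    def target(x):
        # Unnormalized standard normal density; in Bayesian statistics this
        # would be an unnormalized posterior.
        return math.exp(-0.5 * x * x)

    x, samples = 0.0, []
    for _ in range(50_000):
        proposal = x + random.uniform(-1.0, 1.0)  # uses only the current x
        # Metropolis accept/reject step: also uses only the current state.
        if random.random() < min(1.0, target(proposal) / target(x)):
            x = proposal
        samples.append(x)

    kept = samples[5_000:]  # discard burn-in
    mean = sum(kept) / len(kept)
    var = sum((v - mean) ** 2 for v in kept) / len(kept)
    print(mean, var)  # approximately 0 and 1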


Notes

  1. ^ Markov, A. A. (1954). Theory of Algorithms. Moscow: Academy of Sciences of the USSR. (Translated by Jacques J. Schorr-Kon and PST staff; Jerusalem: Israel Program for Scientific Translations, 1961. Original title: Teoriya algorifmov.)
  2. ^ Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-850994-4.
  3. ^ Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. II (2nd edition). Wiley. ISBN 0-471-25709-5. (Pages 9 and 20.)
  4. ^ Pascucci, Andrea (2011). PDE and Martingale Methods in Option Pricing. Berlin: Springer. ISBN 8847017807.
