Characteristic function (probability theory)

The characteristic function of a uniform U(–1,1) random variable. This function is real-valued because it corresponds to a random variable that is symmetric around the origin; in general, however, characteristic functions may be complex-valued.

In probability theory and statistics, the characteristic function of any random variable completely defines its probability distribution. Thus it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.

In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can even be extended to more general cases.

The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.

Introduction

The characteristic function provides an alternative way for describing a random variable. Similarly to the cumulative distribution function


    F_X(x) = \operatorname{E}[\,\mathbf{1}_{\{X\leq x\}}\,]
  (where 1{X ≤ x} is the indicator function — it is equal to 1 when X ≤ x, and zero otherwise)

which completely determines behavior and properties of the probability distribution of the random variable X, the characteristic function


    \varphi_X(t) = \operatorname{E}[\,e^{itX}\,]

also completely determines behavior and properties of the probability distribution of the random variable X. The two approaches are equivalent in the sense that knowing one of the functions it is always possible to find the other, yet they both provide different insight for understanding the features of the random variable. However, in particular cases, there can be differences in whether these functions can be represented as expressions involving simple standard functions.

If a random variable admits a density function, then the characteristic function is its dual, in the sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating function, then the domain of the characteristic function can be extended to the complex plane, and


    \varphi_X(-it) = M_X(t). \,
  [1]

Note however that the characteristic function of a distribution always exists, even when the probability density function or moment-generating function do not.

The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables. Another important application is to the theory of the decomposability of random variables.

Definition

For a scalar random variable X the characteristic function is defined as the expected value of e^{itX}, where i is the imaginary unit, and t ∈ R is the argument of the characteristic function:


    \varphi_X\!:\mathbb{R}\to\mathbb{C}; \quad
                \varphi_X(t) = \operatorname{E}\big[e^{itX}\big] 
                             = \int_{-\infty}^\infty e^{itx}\,dF_X(x) \qquad 
                      \left( = \int_{-\infty}^\infty e^{itx} f_X(x)\,dx \right)

Here FX is the cumulative distribution function of X, and the integral is of the Riemann–Stieltjes kind. If random variable X has a probability density function ƒX, then the characteristic function is its Fourier transform,[2] and the last formula in parentheses is valid.
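
As a quick numerical check of this definition, the following minimal Python sketch (assuming NumPy is available; the sample size and the t values are arbitrary choices) estimates E[e^{itX}] by Monte Carlo for X ~ U(–1, 1) and compares it with the closed form sin(t)/t shown in the figure above and in the table of examples below.

    # Minimal sketch: checking phi_X(t) = E[exp(i t X)] for X ~ U(-1, 1),
    # whose characteristic function is sin(t)/t.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=200_000)              # samples of X

    for t in [0.5, 1.0, 2.0, 5.0]:
        empirical = np.mean(np.exp(1j * t * x))            # Monte Carlo estimate of E[e^{itX}]
        exact = np.sin(t) / t                              # closed form for U(-1, 1)
        print(f"t={t}: empirical={empirical:.4f}, exact={exact:.4f}")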

Note, though, that this convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform.[3] For example, some authors[4] define φX(t) = E[e^{−2πitX}], which is essentially a change of parameter. Other notation may be encountered in the literature: \hat p as the characteristic function for a probability measure p, or \hat f as the characteristic function corresponding to a density ƒ.

The notion of characteristic functions generalizes to multivariate random variables and more complicated random elements. The argument of the characteristic function will always belong to the continuous dual of the space where random variable X takes values. For common cases such definitions are listed below:

  • If X is a k-dimensional random vector, then for t ∈ R^k
    
    \varphi_X(t) = \operatorname{E}\big[\,\exp({i\,t^T\!X})\,\big],
  • If X is a k×p-dimensional random matrix, then for t ∈ R^{k×p}
    
    \varphi_X(t) = \operatorname{E}\big[\,\exp({i\,\operatorname{tr}(t^T\!X)})\,\big],
  • If X is a complex random variable, then for t ∈ C [5]
    
    \varphi_X(t) = \operatorname{E}\big[\,\exp({i\,\operatorname{Re}(\overline{t}X)})\,\big],
  • If X is a k-dimensional complex random vector, then for t ∈ C^k [6]
    
    \varphi_X(t) = \operatorname{E}\big[\,\exp({i\,\operatorname{Re}(t^*\!X)})\,\big],
  • If X(s) is a stochastic process, then for all functions t(s) such that the integral ∫_R t(s)X(s)ds converges for almost all realizations of X [7]
    
    \varphi_X(t) = \operatorname{E}\big[\, \exp({i\int_\mathbb{R} t(s)X(s)ds}) \,\big].

Here t^T denotes the matrix transpose, tr(·) the matrix trace operator, Re(·) the real part of a complex number, z̄ the complex conjugate of z, and * the conjugate transpose (that is, z* = z̄^T).

Examples

Distribution                        Characteristic function φ(t)
Degenerate δ_a                      e^{ita}
Bernoulli Bern(p)                   1 - p + pe^{it}
Binomial B(n, p)                    (1 - p + pe^{it})^n
Negative binomial NB(r, p)          ((1 - p)/(1 - pe^{it}))^r
Poisson Pois(λ)                     e^{λ(e^{it} - 1)}
Uniform U(a, b)                     (e^{itb} - e^{ita}) / (it(b - a))
Laplace L(μ, b)                     e^{itμ} / (1 + b²t²)
Normal N(μ, σ²)                     e^{itμ - σ²t²/2}
Chi-square χ²_k                     (1 - 2it)^{-k/2}
Cauchy Cauchy(μ, θ)                 e^{itμ - θ|t|}
Gamma Γ(k, θ)                       (1 - itθ)^{-k}
Exponential Exp(λ)                  (1 - itλ^{-1})^{-1}
Multivariate normal N(μ, Σ)         e^{it^Tμ - t^TΣt/2}

Oberhettinger (1973) provides extensive tables of characteristic functions.
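
As a spot-check of one row of the table (an illustrative sketch, assuming NumPy; the value of λ and the t values are arbitrary), the Poisson entry can be verified by Monte Carlo:

    # Sketch: comparing the empirical cf of Poisson(lambda) samples with e^{lambda(e^{it}-1)}.
    import numpy as np

    rng = np.random.default_rng(1)
    lam = 3.0
    x = rng.poisson(lam, size=500_000)

    for t in [0.4, 1.0, 2.0]:
        empirical = np.mean(np.exp(1j * t * x))
        exact = np.exp(lam * (np.exp(1j * t) - 1))
        print(t, abs(empirical - exact))                   # small, up to Monte Carlo error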

Properties

  • The characteristic function of a random variable always exists, since it is an integral of a bounded continuous function over a space whose measure is finite.
  • A characteristic function is uniformly continuous on the entire space.
  • It is non-vanishing in a region around zero: φ(0) = 1.
  • It is bounded: | φ(t) | ≤ 1.
  • It is Hermitian: φ(−t) = \overline{φ(t)} (the complex conjugate of φ(t)). In particular, the characteristic function of a symmetric (around the origin) random variable is real-valued and even.
  • There is a bijection between distribution functions and characteristic functions. That is, for any two random variables X1, X2
F_{X_1}=F_{X_2}\ \Leftrightarrow\ \varphi_{X_1}=\varphi_{X_2}
  • If a random variable X has moments up to k-th order, then the characteristic function φX is k times continuously differentiable on the entire real line. In this case
\operatorname{E}[X^k] = (-i)^k \varphi_X^{(k)}(0).
  • If a characteristic function φX has a k-th derivative at zero, then the random variable X has all moments up to k if k is even, but only up to k – 1 if k is odd.[8]
 \varphi_X^{(k)}(0) = i^k \operatorname{E}[X^k]
  • If X1, …, Xn are independent random variables, and a1, …, an are some constants, then the characteristic function of the linear combination of Xi's is
\varphi_{a_1X_1+\ldots+a_nX_n}(t) = \varphi_{X_1}(a_1t)\cdot \ldots \cdot \varphi_{X_n}(a_nt).

One specific case is the sum of two independent random variables X1 and X2, in which case one has \varphi_{X_1+X_2}(t)=\varphi_{X_1}(t)\cdot\varphi_{X_2}(t).

  • The tail behavior of the characteristic function determines the smoothness of the corresponding density function.

Continuity

The bijection stated above between probability distributions and characteristic functions is continuous. That is, whenever a sequence of distribution functions { Fj(x) } converges (weakly) to some distribution F(x), the corresponding sequence of characteristic functions { φj(t) } will also converge, and the limit φ(t) will correspond to the characteristic function of law F. More formally, this is stated as

Lévy’s continuity theorem: A sequence {Xj} of n-variate random variables converges in distribution to a random variable X if and only if the sequence {φXj} converges pointwise to a function φ which is continuous at the origin; in that case φ is the characteristic function of X.[9]

This theorem is frequently used to prove the law of large numbers, and the central limit theorem.
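
The following sketch (assuming NumPy; the choice of U(−1,1) summands and of t is illustrative) shows the mechanism behind the characteristic-function proof of the central limit theorem: the characteristic function of the standardized sum of n i.i.d. U(−1,1) variables, [φ_X(t/(σ√n))]^n, approaches e^{−t²/2}, the characteristic function of N(0,1), as n grows.

    # Sketch: convergence of the cf of a standardized sum to the N(0,1) cf.
    import numpy as np

    def phi_uniform(t):                        # cf of U(-1, 1); t assumed nonzero here
        return np.sin(t) / t

    sigma = np.sqrt(1.0 / 3.0)                 # standard deviation of U(-1, 1)
    t = 1.5
    for n in [1, 5, 50, 500]:
        cf_standardized_sum = phi_uniform(t / (sigma * np.sqrt(n))) ** n
        print(n, cf_standardized_sum, np.exp(-t**2 / 2))   # second column converges to the third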

Inversion formulas

Since there is a one-to-one correspondence between cumulative distribution functions and characteristic functions, it is always possible to find one of these functions if we know the other one. The formula in definition of characteristic function allows us to compute φ when we know the distribution function F (or density ƒ). If, on the other hand, we know the characteristic function φ and want to find the corresponding distribution function, then one of the following inversion theorems can be used.

Theorem. If the characteristic function φX is integrable, then FX is absolutely continuous, and therefore X has a probability density function given by


    f_X(x) = F_X'(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-itx}\varphi_X(t)dt,
    when X is scalar;

in the multivariate case the pdf is understood as the Radon–Nikodym derivative of the distribution μX with respect to the Lebesgue measure λ:


    f_X(x) = \frac{d\mu_X}{d\lambda}(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{-i(t\cdot x)}\varphi_X(t)\lambda(dt).

Theorem (Lévy).[10] If φX is the characteristic function of the distribution function FX, and two points a < b are such that {x | a < x < b} is a continuity set of μX (in the univariate case this condition is equivalent to continuity of FX at the points a and b), then


    F_X(b) - F_X(a) = \frac{1} {2\pi} \lim_{T \to \infty} \int_{-T}^{+T} \frac{e^{-ita} - e^{-itb}} {it}\, \varphi_X(t)\, dt,
    if X is scalar
\mu_X\big(\{a<x<b\}\big) = \frac{1}{(2\pi)^n} \lim_{T_1\to\infty}\cdots\lim_{T_n\to\infty} \int\limits_{-T_1\leq t_1\leq T_1} \cdots \int\limits_{-T_n \leq t_n \leq T_n} \prod_{k=1}^n\left(\frac{e^{-it_ka_k}-e^{-it_kb_k}}{it_k}\right)\varphi_X(t)\lambda(dt_1 \times \cdots \times dt_n),   if X is a vector random variable.

Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of FX) then

F_X(a) - F_X(a-0) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T}e^{-ita}\varphi_X(t)dt,   when X is a scalar random variable
\mu_X(\{a\}) = \lim_{T_1\to\infty}\cdots\lim_{T_n\to\infty} \left(\prod_{k=1}^n\frac{1}{2T_k}\right) \int\limits_{\{-T\leq t\leq T\}} e^{-i(t\cdot x)}\varphi_X(t)\lambda(dt),   when X is a vector random variable.

Theorem (Gil-Pelaez).[11] For a univariate random variable X, if x is a continuity point of FX then

F_X(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^\infty \frac{\operatorname{Im}[e^{-itx}\varphi_X(t)]}{t}\,dt.

Inversion formulas for multivariate distributions are available.[12]
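
The Gil-Pelaez formula lends itself directly to numerical inversion. The sketch below (assuming NumPy; the truncation point, grid size, and the standard normal test case are arbitrary choices) recovers values of FX from φX(t) = e^{−t²/2} and compares them with the exact normal cdf.

    # Sketch: numerical Gil-Pelaez inversion, tested on the standard normal cf.
    import numpy as np
    from math import erf

    def gil_pelaez_cdf(phi, x, t_max=50.0, n=200_000):
        t = np.linspace(1e-8, t_max, n)                    # avoid t = 0; the integrand stays finite
        integrand = np.imag(np.exp(-1j * t * x) * phi(t)) / t
        dt = t[1] - t[0]
        return 0.5 - np.sum(integrand) * dt / np.pi        # truncated Riemann sum of the integral

    phi_normal = lambda t: np.exp(-t**2 / 2)               # cf of N(0, 1)
    for x in [-1.0, 0.0, 0.5, 2.0]:
        exact = 0.5 * (1 + erf(x / np.sqrt(2)))            # standard normal cdf
        print(x, gil_pelaez_cdf(phi_normal, x), exact)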

Criteria for characteristic functions

It is well known that any non-decreasing càdlàg function F with limits F(−∞) = 0, F(+∞) = 1 corresponds to a cumulative distribution function of some random variable.

There is also interest in finding similar simple criteria for when a given function φ could be the characteristic function of some random variable. The central result here is Bochner’s theorem, although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify. Other theorems also exist, such as Khinchine’s, Mathias’s, or Cramér’s, although their application is just as difficult. Pólya’s theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.[13]

  • Bochner’s theorem. An arbitrary function \scriptstyle \varphi:\ \mathbb{R}^n\to\mathbb{C} is the characteristic function of some random variable if and only if φ is positive definite, continuous at the origin, and if φ(0) = 1.
  • Khinchine’s criterion. An absolutely continuous complex-valued function φ equal to 1 at the origin is a characteristic function if and only if it admits the representation
    \varphi(t) = \int_{-\infty}^\infty g(t+\theta)\overline{g(\theta)} d\theta .
  • Mathias’ theorem. A real, even, continuous, absolutely integrable function φ equal to 1 at the origin is a characteristic function if and only if
    (-1)^n \int_{-\infty}^\infty \varphi(pt)e^{-t^2/2}H_{2n}(t)dt \geq 0
    for n = 0,1,2,…, and all p > 0. Here H2n denotes the Hermite polynomial of degree 2n.
  • Pólya’s theorem can be used to construct an example of two random variables whose characteristic functions coincide over a finite interval but are different elsewhere.

    Pólya’s theorem. If φ is a real-valued continuous function which satisfies the conditions

    1. φ(0) = 1,
    2. φ is even,
    3. φ is convex for t>0,
    4. φ(∞) = 0,

    then φ(t) is the characteristic function of an absolutely continuous symmetric distribution.

  • A convex linear combination \scriptstyle \sum_n a_n\varphi_n(t) (with \scriptstyle a_n\geq0,\ \sum_n a_n=1) of a finite or a countable number of characteristic functions is also a characteristic function.
  • The product \scriptstyle \prod_n \varphi_n(t) of a finite number of characteristic functions is also a characteristic function. The same holds for an infinite product provided that it converges to a function continuous at the origin.
  • If φ is a characteristic function and α is a real number, then φ, Re[φ], |φ|2, and φ(αt) are also characteristic functions.
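
Bochner's positive-definiteness condition can at least be tested numerically on a finite grid: for any points t1, …, tm, the matrix [φ(tj − tk)] formed from a genuine characteristic function must be Hermitian and positive semidefinite. A negative eigenvalue on some grid therefore certifies that a candidate φ is not a characteristic function, while nonnegative eigenvalues on one grid prove nothing. A minimal sketch (assuming NumPy; the grid and the two test functions are illustrative):

    # Sketch: a grid-based necessary-condition check motivated by Bochner's theorem.
    import numpy as np

    def min_eigenvalue_on_grid(phi, t):
        M = phi(t[:, None] - t[None, :])                   # matrix of phi(t_j - t_k)
        return np.linalg.eigvalsh(M).min()                 # eigenvalues of the (real symmetric) matrix

    t = np.linspace(-10, 10, 201)
    print(min_eigenvalue_on_grid(lambda u: np.exp(-u**2 / 2), t))             # N(0,1) cf: >= 0 up to rounding
    print(min_eigenvalue_on_grid(lambda u: np.maximum(0, 1 - np.abs(u)), t))  # Polya-type cf: >= 0 up to rounding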

Uses

Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main trick involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.

Basic manipulations of distributions

Characteristic functions are particularly useful for dealing with linear functions of independent random variables. For example, if X1, X2, ..., Xn is a sequence of independent (and not necessarily identically distributed) random variables, and

S_n = \sum_{i=1}^n a_i X_i,\,\!

where the ai are constants, then the characteristic function for Sn is given by


\varphi_{S_n}(t)=\varphi_{X_1}(a_1t)\varphi_{X_2}(a_2t)\cdots \varphi_{X_n}(a_nt) \,\!

In particular, φX+Y(t) = φX(t)φY(t). To see this, write out the definition of characteristic function:

\varphi_{X+Y}(t)=E\left(e^{it(X+Y)}\right)=E\left(e^{itX}e^{itY}\right)=E\left(e^{itX}\right)E\left(e^{itY}\right)=\varphi_X(t) \varphi_Y(t)

Observe that the independence of X and Y is required to establish the equality of the third and fourth expressions.

Another special case of interest is ai = 1/n for every i, in which case Sn is the sample mean. Writing \overline{X} for the sample mean,

\varphi_{\overline{X}}(t)=\left(\varphi_X(t/n)\right)^n
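
Both identities are easy to check by Monte Carlo. The sketch below (assuming NumPy; the Exp(1) and U(0,1) inputs, the sample sizes, and the value of t are arbitrary choices) compares empirical characteristic functions with the product formulas above.

    # Sketch: checking phi_{X+Y} = phi_X * phi_Y and phi_{Xbar}(t) = (phi_X(t/n))^n.
    import numpy as np

    rng = np.random.default_rng(2)
    n_samples = 400_000
    t = 1.3
    ecf = lambda z: np.mean(np.exp(1j * t * z))            # empirical characteristic function at t

    # Independent X ~ Exp(1) and Y ~ U(0, 1)
    x = rng.exponential(1.0, n_samples)
    y = rng.uniform(0.0, 1.0, n_samples)
    print(ecf(x + y), ecf(x) * ecf(y))                     # agree up to Monte Carlo error

    # Sample mean of n = 4 i.i.d. Exp(1) variables
    n = 4
    xbar = rng.exponential(1.0, (n_samples, n)).mean(axis=1)
    phi_exp = lambda s: 1.0 / (1.0 - 1j * s)               # cf of Exp(1), from the table of examples
    print(ecf(xbar), phi_exp(t / n) ** n)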

Moments

Characteristic functions can also be used to find moments of a random variable. Provided that the n-th moment exists, the characteristic function can be differentiated n times, and

\operatorname{E}\left(X^n\right) = i^{-n}\, \varphi_X^{(n)}(0)
  = i^{-n}\, \left[\frac{d^n}{dt^n} \varphi_X(t)\right]_{t=0} \,\!

For example, suppose X has a standard Cauchy distribution. Then φX(t) = e^{−|t|}. This function is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also, by the result of the previous section, the sample mean \overline{X} of n independent observations has characteristic function φ_{\overline{X}}(t) = (e^{−|t|/n})^n = e^{−|t|}. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.
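
For distributions whose moments do exist, the differentiation rule is straightforward to apply with a computer algebra system. A short sketch (assuming SymPy; the Exp(λ) example is an arbitrary choice) recovers the first three moments n!/λ^n from the characteristic function given in the table of examples:

    # Sketch: moments from derivatives of the cf, E[X^n] = i^{-n} phi^{(n)}(0), for Exp(lambda).
    import sympy as sp

    t, lam = sp.symbols('t lambda', positive=True)
    phi = 1 / (1 - sp.I * t / lam)                         # cf of Exp(lambda)

    for n in (1, 2, 3):
        moment = sp.simplify(sp.I**(-n) * sp.diff(phi, t, n).subs(t, 0))
        print(n, moment)                                   # 1/lambda, 2/lambda**2, 6/lambda**3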

The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; note that some instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the second cumulant generating function.

Data analysis

Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the stable distribution, since closed-form expressions for the density are not available, which makes maximum-likelihood estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function calculated from the data. Paulson et al. (1975) and Heathcote (1977) provide some theoretical background for such an estimation procedure. In addition, Yu (2004) describes applications of empirical characteristic functions to fit time-series models where likelihood procedures are impractical.
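
As a rough illustration of the idea (not the specific estimators of Paulson et al. or Heathcote; it assumes NumPy and SciPy, and the normal model, simulated sample, t grid, and starting values are arbitrary choices), one can fit parameters by minimizing the squared distance between the empirical characteristic function and a parametric characteristic function over a grid of t values:

    # Sketch: fitting N(mu, sigma^2) by matching theoretical and empirical cfs on a grid.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    data = rng.normal(2.0, 1.5, size=5_000)                # stand-in for an observed sample
    t_grid = np.linspace(0.05, 2.0, 40)

    ecf = np.array([np.mean(np.exp(1j * t * data)) for t in t_grid])   # empirical cf

    def objective(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                          # keeps sigma positive
        model_cf = np.exp(1j * t_grid * mu - 0.5 * sigma**2 * t_grid**2)
        return np.sum(np.abs(ecf - model_cf) ** 2)

    result = minimize(objective, x0=[0.0, 0.0])
    print(result.x[0], np.exp(result.x[1]))                # estimates of mu and sigma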

Example

The Gamma distribution with scale parameter θ and shape parameter k has the characteristic function

(1 - \theta\,i\,t)^{-k}.

Now suppose that we have

 X ~\sim \Gamma(k_1,\theta) \mbox{ and } Y \sim \Gamma(k_2,\theta) \,

with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are

\varphi_X(t)=(1 - \theta\,i\,t)^{-k_1},\,\qquad \varphi_Y(t)=(1 - \theta\,i\,t)^{-k_2}

which by independence and the basic properties of characteristic function leads to

\varphi_{X+Y}(t)=\varphi_X(t)\varphi_Y(t)=(1 - \theta\,i\,t)^{-k_1}(1 - \theta\,i\,t)^{-k_2}=\left(1 - \theta\,i\,t\right)^{-(k_1+k_2)}.

This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k1 + k2, and we therefore conclude

X+Y \sim \Gamma(k_1+k_2,\theta) \,

The result can be expanded to n independent gamma distributed random variables with the same scale parameter and we get

\forall i \in \{1,\ldots, n\} :  X_i \sim \Gamma(k_i,\theta) \qquad \Rightarrow \qquad \sum_{i=1}^n X_i \sim \Gamma\left(\sum_{i=1}^nk_i,\theta\right).
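
A Monte Carlo check of this conclusion (an illustrative sketch assuming NumPy; the parameter values are arbitrary) compares the empirical characteristic function of X + Y with (1 − iθt)^{−(k1+k2)}:

    # Sketch: empirical cf of the sum of two independent gammas vs. the closed form.
    import numpy as np

    rng = np.random.default_rng(4)
    k1, k2, theta = 2.0, 3.5, 1.2
    x = rng.gamma(shape=k1, scale=theta, size=300_000)
    y = rng.gamma(shape=k2, scale=theta, size=300_000)

    for t in [0.4, 1.0, 2.5]:
        empirical = np.mean(np.exp(1j * t * (x + y)))
        exact = (1 - 1j * theta * t) ** (-(k1 + k2))
        print(t, abs(empirical - exact))                   # small, up to Monte Carlo error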

Entire characteristic functions

As defined above, the argument of the characteristic function is treated as a real number; however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.[14]

Related concepts

Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions. However, this is not the case for the moment-generating function.

The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x) (according to the usual convention; see continuous Fourier transform – other conventions).

\varphi_X(t) = \langle e^{itX} \rangle = \int_{-\infty}^\infty e^{itx}p(x)\, dx = \overline{\left( \int_{-\infty}^\infty e^{-itx}p(x)\, dx \right)} = \overline{P(t)},

where P(t) denotes the continuous Fourier transform of the probability density function p(x). Likewise, p(x) may be recovered from φX(t) through the inverse Fourier transform:

p(x) = \frac{1}{2\pi} \int_{-\infty}^\infty e^{itx} P(t)\, dt = \frac{1}{2\pi} \int_{-\infty}^\infty e^{itx} \overline{\varphi_X(t)}\, dt.

Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.
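
The conjugation in this relationship is easy to see numerically for an asymmetric distribution. The sketch below (assuming NumPy; the Exp(1) example, truncation point, and t values are arbitrary) approximates the Fourier transform P(t) of the Exp(1) density by a Riemann sum and checks that φX(t) = 1/(1 − it) matches its complex conjugate.

    # Sketch: phi_X(t) equals the complex conjugate of the Fourier transform of the density.
    import numpy as np

    x = np.linspace(0, 40, 40_001)
    dx = x[1] - x[0]
    p = np.exp(-x)                                         # Exp(1) density on [0, infinity)

    for t in [0.5, 1.0, 2.0]:
        P = np.sum(np.exp(-1j * t * x) * p) * dx           # truncated Riemann sum for P(t)
        phi = 1.0 / (1.0 - 1j * t)                         # cf of Exp(1)
        print(t, phi, np.conj(P))                          # the two agree closely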

See also

  • Subindependence, a weaker condition than independence, that is defined in terms of characteristic functions.

Notes

  1. ^ Lukacs (1970) p. 196
  2. ^ Billingsley (1995)
  3. ^ Pinsky (2002)
  4. ^ Bochner (1955)
  5. ^ Andersen et al. (1995, Definition 1.10)
  6. ^ Andersen et al. (1995, Definition 1.20)
  7. ^ Sobczyk (2001)
  8. ^ Lukacs (1970), Corollary 1 to Theorem 2.3.1
  9. ^ Cuppens (1975, Theorem 2.6.9)
  10. ^ Named after the French mathematician Paul Pierre Lévy
  11. ^ Wendel, J.G. (1961)
  12. ^ Shephard (1991a,b)
  13. ^ Lukacs (1970), p.84
  14. ^ Lukacs (1970, Chapter 7)

References

  • Andersen, H.H., M. Højbjerre, D. Sørensen, P.S. Eriksen (1995). Linear and graphical models for the multivariate complex normal distribution. Lecture notes in statistics 101. New York: Springer-Verlag. ISBN 0-387-94521-0. 
  • Billingsley, Patrick (1995). Probability and measure (3rd ed.). John Wiley & Sons. ISBN 0-471-00710-2. 
  • Bisgaard, T. M.; Z. Sasvári (2000). Characteristic functions and moment sequences. Nova Science. 
  • Bochner, Salomon (1955). Harmonic analysis and the theory of probability. University of California Press. 
  • Cuppens, R. (1975). Decomposition of multivariate probabilities. Academic Press. 
  • Heathcote, C.R. (1977). "The integrated squared error estimation of parameters". Biometrika 64 (2): 255–264. doi:10.1093/biomet/64.2.255. 
  • Lukacs, E. (1970). Characteristic functions. London: Griffin. 
  • Oberhettinger, Fritz (1973). Fourier Transforms of Distributions and their Inverses: A Collection of Tables. Academic Press. 
  • Paulson, A.S.; E.W. Holcomb, R.A. Leitch (1975). "The estimation of the parameters of the stable laws". Biometrika 62 (1): 163–170. doi:10.1093/biomet/62.1.163. 
  • Pinsky, Mark (2002). Introduction to Fourier analysis and wavelets. Brooks/Cole. ISBN 0-534-37660-6. 
  • Sobczyk, Kazimierz (2001). Stochastic differential equations. Kluwer Academic Publishers. ISBN 9781402003455. 
  • Wendel, J.G. (1961). "The non-absolute convergence of Gil-Pelaez' inversion integral". The Annals of Mathematical Statistics 32 (1): 338–339. doi:10.1214/aoms/1177705164. 
  • Yu, J. (2004). "Empirical characteristic function estimation and its applications". Econometric Reviews 23 (2): 93–123. doi:10.1081/ETC-120039605. 
  • Shephard, N. G. (1991a) From characteristic function to distribution function: A simple framework for the theory. Econometric Theory, 7, 519–529.
  • Shephard, N. G. (1991b) Numerical integration rules for multivariate inversions. J. Statist. Comput. Simul., 39, 37–46.
