Karhunen-Loève theorem
In the theory of stochastic processes, the Karhunen-Loève theorem (named after Kari Karhunen and Michel Loève) is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. In contrast to a Fourier series, where the coefficients are real numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen-Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. If we regard a stochastic process as a random "function" F, that is, one in which the random value is a function on an interval [a, b], then this theorem can be considered as a random orthonormal expansion of F.

In the case of a centered stochastic process {X_t}_{t ∈ [a, b]} (where "centered" means that the expectations E(X_t) are defined and equal to 0 for all values of the parameter t in [a, b]) satisfying a technical continuity condition, the process admits a decomposition
:$X_t = \sum_{k=1}^{\infty} Z_k\, e_k(t),$
where the Z_k are pairwise uncorrelated random variables and the functions e_k are continuous real-valued functions on [a, b] which are pairwise orthogonal in L^2[a, b]. The general case of a process which is not centered can be represented by expanding the expectation function (which is a non-random function) in the basis e_k.

Moreover, if the process is Gaussian, then the random variables Z_k are Gaussian and stochastically independent. This result generalizes the Karhunen-Loève transform. An important example of a centered real stochastic process on [0, 1] is the Wiener process, and the Karhunen-Loève theorem can be used to provide a canonical orthogonal representation for it. In this case the expansion consists of sinusoidal functions.

The above expansion into uncorrelated random variables is also known as the Karhunen-Loève expansion or Karhunen-Loève decomposition. The empirical version (i.e., with the coefficients computed from a sample) is known as principal component analysis, proper orthogonal decomposition (POD), or the Hotelling transform.
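As a minimal illustration of this empirical version, the following sketch (assuming NumPy; the array names and the toy data are chosen for the example, not part of the theorem) estimates the covariance matrix from sampled paths and extracts the leading discrete modes and coefficients by eigendecomposition, which is exactly the PCA/POD computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: 500 realizations of a process observed at 200 time points.
n_samples, n_points = 500, 200
t = np.linspace(0.0, 1.0, n_points)
# Brownian-motion-like paths built from cumulative Gaussian increments (illustrative only).
paths = np.cumsum(rng.standard_normal((n_samples, n_points)) * np.sqrt(1.0 / n_points), axis=1)

# Empirical Karhunen-Loève / PCA: center the sample, estimate the covariance matrix,
# and diagonalize it to obtain discrete eigenmodes and their variances.
centered = paths - paths.mean(axis=0)
cov = centered.T @ centered / (n_samples - 1)        # (n_points, n_points)
eigvals, eigvecs = np.linalg.eigh(cov)               # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # leading modes first

# Coefficients ("scores") of each sampled path in the leading modes.
scores = centered @ eigvecs[:, :5]
print(eigvals[:5])
```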
Formulation
We will formulate the result in terms of complex-valued stochastic processes. The results apply to real-valued processes without modification, by recognizing that the complex conjugate of a real number is the number itself.
If X and Y are random variables, the inner product is defined by
:$\langle X, Y \rangle = \operatorname{E}(X Y^*),$
where * represents complex conjugation.

Second order statistics
The inner product is defined if both X and Y have finite second moments, or equivalently, if they are both square integrable. Note that the inner product is related to covariance and correlation. In particular, for random variables of mean zero, covariance and inner product coincide. The autocovariance function K_XX is
:$K_{XX}(t,s) = \operatorname{Cov}(X_t, X_s)$
:$\qquad\qquad = \operatorname{E}\big[(X_t - \operatorname{E}[X_t])\,(X_s - \operatorname{E}[X_s])^*\big]$
:$\qquad\qquad = R_{XX}(t,s) - \operatorname{E}[X_t]\,\operatorname{E}[X_s]^*,$
where R_XX(t,s) = E[X_t X_s^*] is the autocorrelation function.
If {X"t"}"t" is a centered process, then
:
for all "t". Thus, the autocovariance "K"XX is identical to the autocorrelation "R"XX:
:
Note that if {X"t"}"t" is centered and "t"1, ≤ "t"2, ..., ≤ "t""N" are points in ["a", "b"] , then
:
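This nonnegative-definiteness is easy to check numerically; the brief sketch below (assuming NumPy) evaluates the covariance kernel min(s, t), which reappears later as the Wiener-process covariance, on a grid of points and verifies that the resulting matrix has no negative eigenvalues beyond round-off.

```python
import numpy as np

# Grid of points t_1 <= ... <= t_N in [0, 1] and the kernel K(t, s) = min(s, t).
t = np.linspace(0.01, 1.0, 50)
K = np.minimum.outer(t, t)

# The matrix [K(t_k, t_l)] should be nonnegative definite: all eigenvalues >= 0 (up to round-off).
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-12)  # expected: True
```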
Statement of the theorem
Theorem. Consider a centered stochastic process {X_t}_t indexed by t in the interval [a, b] with covariance function Cov_X. Suppose the covariance function Cov_X(t,s) is jointly continuous in t and s. Then Cov_X can be regarded as a positive-definite kernel, and so by Mercer's theorem the corresponding integral operator T on L^2[a, b] (relative to Lebesgue measure on [a, b]), defined by
:$(Tf)(t) = \int_a^b \operatorname{Cov}_X(t,s)\, f(s)\, ds,$
has an orthonormal basis of eigenvectors. Let {e_i}_i be the eigenvectors of T corresponding to non-zero eigenvalues and set
:$Z_i = \int_a^b X_t\, e_i(t)^*\, dt.$
Then the Z_i are centered orthogonal random variables and
:$X_t = \sum_{i=1}^{\infty} Z_i\, e_i(t),$
where the convergence is in the mean and is uniform in t. Moreover,
:$\operatorname{E}[Z_i Z_j^*] = \delta_{ij}\, \lambda_i,$
where λ_i is the eigenvalue corresponding to the eigenvector e_i.
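A rough numerical counterpart of the theorem (a sketch only, assuming NumPy and a uniform grid; the function names are chosen for this example) discretizes the integral operator T with a simple quadrature rule and reads off approximate eigenpairs (λ_i, e_i) and, for a sampled path, the coefficients Z_i.

```python
import numpy as np

def kl_eigenpairs(cov, a=0.0, b=1.0, n=200, n_modes=5):
    r"""Approximate the leading eigenvalues/eigenfunctions of the operator
    (Tf)(t) = \int_a^b cov(t, s) f(s) ds using a uniform grid and equal weights."""
    t = np.linspace(a, b, n)
    w = (b - a) / n                          # uniform quadrature weight
    K = cov(t[:, None], t[None, :])          # kernel matrix K(t_k, t_l)
    eigvals, eigvecs = np.linalg.eigh(K * w)
    idx = np.argsort(eigvals)[::-1][:n_modes]
    lam = eigvals[idx]
    e = eigvecs[:, idx] / np.sqrt(w)         # rescale so that \int e_i(t)^2 dt ~ 1
    return t, lam, e

def kl_coefficients(path, e, a=0.0, b=1.0):
    r"""Z_i ~ \int_a^b X_t e_i(t) dt by the same quadrature rule (real-valued case)."""
    w = (b - a) / len(path)
    return w * (path @ e)

# Example with the covariance min(s, t) on [0, 1] (see the Wiener-process section below).
t, lam, e = kl_eigenpairs(lambda s, u: np.minimum(s, u))
print(lam)   # should be close to 1 / ((k - 1/2)**2 * pi**2), k = 1, 2, ...
```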
Cauchy sums
In the statement of the theorem, the integral defining Z_i can be defined as the limit in the mean of Cauchy sums of random variables:
:$Z_i = \lim_{n \to \infty} \sum_{k=0}^{n-1} X_{\xi_k}\, e_i(\xi_k)^*\, (t_{k+1} - t_k),$
where
:$a = t_0 \le \xi_0 \le t_1 \le \xi_1 \le \cdots \le t_{n-1} \le \xi_{n-1} \le t_n = b$
and the mesh $\max_k (t_{k+1} - t_k)$ tends to 0.
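In code, such a Cauchy (Riemann-type) sum is just a weighted dot product. The toy sketch below (assuming NumPy; the sampled path and partition are hypothetical) approximates a single coefficient Z_1 for a simulated path, using the first Wiener eigenfunction discussed further down.

```python
import numpy as np

# Partition of [a, b] = [0, 1] with evaluation points xi_k taken as the left endpoints.
xi = np.linspace(0.0, 1.0, 1000, endpoint=False)
dt = np.diff(np.append(xi, 1.0))                     # widths t_{k+1} - t_k
# A sampled Brownian-like path X_{xi_k} (illustrative only).
x = np.cumsum(np.sqrt(dt) * np.random.default_rng(1).standard_normal(xi.size))
e1 = np.sqrt(2) * np.sin(0.5 * np.pi * xi)           # first Wiener eigenfunction (see below)

Z1 = np.sum(x * e1 * dt)   # Cauchy sum approximating Z_1 = \int_0^1 X_t e_1(t) dt
print(Z1)
```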
Special case: Gaussian distribution
Since the limit in the mean of jointly Gaussian random variables is jointly Gaussian, and centered jointly Gaussian random variables are independent if and only if they are orthogonal, we can also conclude:
Theorem. The variables Z"i" have a joint Gaussian distribution and are stochastically independent if the original process {X"t"}"t" is Gaussian.
In the Gaussian case, since the variables Z_i are independent, we can say more:
:$\lim_{N \to \infty} \sum_{i=1}^{N} Z_i\, e_i(t) = X_t$
almost surely.
Note that by generalizations of Mercer's theorem we can replace the interval [a, b] with other compact spaces C and Lebesgue measure on [a, b] with a Borel measure whose support is C.
The Wiener process
There are numerous equivalent characterizations of the Wiener process, which is a mathematical formalization of Brownian motion. Here we regard it as the centered standard Gaussian process B(t) on [0, 1] with covariance function
:$K_{BB}(t,s) = \operatorname{Cov}(B(t), B(s)) = \min(s, t).$
The eigenvectors of the covariance kernel are easily determined. These are
:$e_k(t) = \sqrt{2}\, \sin\!\left(\left(k - \tfrac{1}{2}\right) \pi t\right),$
and the corresponding eigenvalues are
:$\lambda_k = \frac{1}{\left(k - \tfrac{1}{2}\right)^2 \pi^2}, \qquad k = 1, 2, \ldots
This gives the following representation of the Wiener process:
Theorem. There is a sequence {W_i}_i of independent Gaussian random variables with mean zero and variance 1 such that
:$B(t) = \sqrt{2} \sum_{k=1}^{\infty} W_k\, \frac{\sin\!\left(\left(k - \tfrac{1}{2}\right) \pi t\right)}{\left(k - \tfrac{1}{2}\right) \pi}.$
Convergence is uniform in t and in the L^2 norm, that is,
:$\operatorname{E}\left[\left(B(t) - \sqrt{2} \sum_{k=1}^{n} W_k\, \frac{\sin\!\left(\left(k - \tfrac{1}{2}\right) \pi t\right)}{\left(k - \tfrac{1}{2}\right) \pi}\right)^{\!2}\right] \to 0 \quad \text{as } n \to \infty,$
uniformly in t.
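As an illustration (a sketch assuming NumPy; the function name and grid are chosen for the example), one can simulate approximate Wiener paths by truncating this series at a finite number of terms:

```python
import numpy as np

def wiener_kl_path(t, n_terms=100, rng=None):
    """Approximate a standard Wiener path on [0, 1] by the truncated series
    B(t) ~ sqrt(2) * sum_k W_k sin((k - 1/2) pi t) / ((k - 1/2) pi)."""
    rng = np.random.default_rng() if rng is None else rng
    W = rng.standard_normal(n_terms)              # i.i.d. N(0, 1) coefficients
    k = np.arange(1, n_terms + 1)
    freqs = (k - 0.5) * np.pi                     # (k - 1/2) * pi
    # Row k of `modes` is sin((k - 1/2) pi t) / ((k - 1/2) pi) evaluated on the grid t.
    modes = np.sin(np.outer(freqs, t)) / freqs[:, None]
    return np.sqrt(2.0) * W @ modes

t = np.linspace(0.0, 1.0, 500)
path = wiener_kl_path(t, n_terms=200, rng=np.random.default_rng(42))
# Sanity check: over many paths, the sample variance of path[-1] should be close to Var(B(1)) = 1.
```

Adding more terms refines the small-scale fluctuations of the path; the truncation error is controlled by the tail of the eigenvalues λ_k, which decay like 1/k².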
References
* I. Guikhman, A. Skorokhod, "Introduction à la Théorie des Processus Aléatoires", Éditions MIR, 1977
* B. Simon, "Functional Integration and Quantum Physics", Academic Press, 1979
* K. Karhunen, "Über lineare Methoden in der Wahrscheinlichkeitsrechnung", Ann. Acad. Sci. Fennicae, Ser. A. I. Math.-Phys., No. 37, 1947, pp. 1–79
* M. Loève, "Probability Theory", Vol. II, 4th ed., Graduate Texts in Mathematics, Vol. 46, Springer-Verlag, 1978, ISBN 0-387-90262-7

See also
* Principal component analysis
* Proper orthogonal decomposition
* Polynomial chaos