Law of total variance


In probability theory, the law of total variance or variance decomposition formula states that if X and Y are random variables on the same probability space, and the variance of X is finite, then

\operatorname{Var}(X) = \operatorname{E}(\operatorname{Var}(X \mid Y)) + \operatorname{Var}(\operatorname{E}(X \mid Y)).

In language perhaps better known to statisticians than to probabilists, the two terms are the "unexplained" and the "explained" components of the variance (cf. explained variation).

The nomenclature in this article's title parallels the phrase "law of total probability". Some writers on probability call this the "conditional variance formula" or use other names.

(The conditional expected value E(X | Y) is a random variable in its own right, whose value depends on the value of Y. Note that the conditional expected value of X given the event Y = y is a function of y; this is where adherence to the conventional, rigidly case-sensitive notation of probability theory becomes important. If we write E(X | Y = y) = g(y), then the random variable E(X | Y) is just g(Y). Similar comments apply to the conditional variance.)
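The decomposition can be checked exactly on a small two-stage experiment. In the following sketch the groups and values are made up for illustration: Y picks one of two equally likely groups, X is uniform on two values within the group, and the within-group term E(Var(X | Y)) and the between-group term Var(E(X | Y)) sum to Var(X).

```python
# A minimal two-stage sketch (values are made up): Y = 0 or 1 with
# probability 1/2 each; given Y = y, X is uniform on values[y].
values = {0: [0, 2], 1: [4, 6]}

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs])

# All four (y, x) outcomes are equally likely, so the marginal
# distribution of X is uniform on the pooled values.
pooled = [x for y in values for x in values[y]]

total_var = var(pooled)                           # Var(X), computed directly
within = mean([var(values[y]) for y in values])   # E[Var(X | Y)]
between = var([mean(values[y]) for y in values])  # Var(E[X | Y])

print(total_var, within, between)  # 5.0 1.0 4.0
assert total_var == within + between
```

Here the "unexplained" (within-group) part contributes 1 and the "explained" (between-group) part contributes 4 of the total variance 5.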

Proof

The law of total variance can be proved using the law of total expectation: First,

\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2

from the definition of variance. Then we apply the law of total expectation by conditioning on the random variable Y:

= \operatorname{E}[\operatorname{E}[X^2 \mid Y]] - \operatorname{E}[\operatorname{E}[X \mid Y]]^2

Now we rewrite the conditional second moment of X in terms of its variance and first moment:

= \operatorname{E}\left[\operatorname{Var}[X \mid Y] + \operatorname{E}[X \mid Y]^2\right] - \operatorname{E}[\operatorname{E}[X \mid Y]]^2

Since expectation of a sum is the sum of expectations, we can now regroup the terms:

= \operatorname{E}[\operatorname{Var}[X \mid Y]] + \left(\operatorname{E}[\operatorname{E}[X \mid Y]^2] - \operatorname{E}[\operatorname{E}[X \mid Y]]^2\right)

Finally, we recognize the terms in parentheses as the variance of the conditional expectation E[X | Y]:

= \operatorname{E}[\operatorname{Var}[X \mid Y]] + \operatorname{Var}[\operatorname{E}[X \mid Y]]
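The result just derived can also be sanity-checked by simulation. The sketch below uses a hypothetical model with made-up parameters: Y ~ N(0, 1) and X | Y ~ N(2Y, 3²), so E[Var(X | Y)] = 9, Var(E(X | Y)) = Var(2Y) = 4, and the law predicts Var(X) = 13.

```python
import random

random.seed(0)

# Monte Carlo check with made-up parameters: Y ~ N(0, 1), X | Y ~ N(2Y, 3^2).
# The law of total variance predicts Var(X) = E[Var(X|Y)] + Var(E[X|Y])
#                                           = 9 + 4 = 13.
n = 200_000
ys = [random.gauss(0, 1) for _ in range(n)]
xs = [random.gauss(2 * y, 3) for y in ys]

def sample_var(sample):
    m = sum(sample) / len(sample)
    return sum((s - m) ** 2 for s in sample) / len(sample)

print(round(sample_var(xs), 1))  # close to 13
assert abs(sample_var(xs) - 13) < 0.5
```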

The square of the correlation

If the graph of the conditional expected value is a straight line, i.e., if

\operatorname{E}(X \mid Y) = a + bY,

then the explained component of the variance divided by the total variance is just the square of the correlation between X and Y; i.e., in that case,

\frac{\operatorname{Var}(\operatorname{E}(X \mid Y))}{\operatorname{Var}(X)} = \operatorname{Corr}(X, Y)^2.
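This can be verified with exact arithmetic for a hypothetical linear model X = a + bY + ε with ε independent of Y (so that E(X | Y) = a + bY); the coefficients below are made up for illustration.

```python
from fractions import Fraction

# If X = a + b*Y + eps with eps independent of Y, then
#   Var(E[X|Y]) = b^2 Var(Y),
#   Var(X)      = b^2 Var(Y) + Var(eps),
#   Cov(X, Y)   = b Var(Y).
# The parameters below are made up for illustration.
b = Fraction(2)
var_y = Fraction(1)
var_eps = Fraction(9)

explained = b**2 * var_y                      # Var(E[X | Y])
total = b**2 * var_y + var_eps                # Var(X)
corr_sq = (b * var_y) ** 2 / (var_y * total)  # Corr(X, Y)^2

print(explained / total, corr_sq)  # 4/13 4/13
assert explained / total == corr_sq
```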

Higher moments

A similar law for the third central moment μ₃ says

\mu_3(X) = \operatorname{E}(\mu_3(X \mid Y)) + \mu_3(\operatorname{E}(X \mid Y)) + 3\operatorname{Cov}(\operatorname{E}(X \mid Y), \operatorname{Var}(X \mid Y)).
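Since this identity holds for any joint distribution with finite third moments, it can be checked exactly on a small joint pmf; the probabilities below are made up for illustration.

```python
from fractions import Fraction as F

# Exact check of the third-central-moment identity on a small, arbitrarily
# chosen joint pmf: pmf[(x, y)] = P(X = x, Y = y).
pmf = {(0, 0): F(1, 4), (3, 0): F(1, 4), (1, 1): F(1, 3), (5, 1): F(1, 6)}

ys = sorted({y for (_, y) in pmf})
p_y = {y0: sum(p for (x, y), p in pmf.items() if y == y0) for y0 in ys}

# Conditional mean, variance, and third central moment of X given Y = y0.
mu, v, t = {}, {}, {}
for y0 in ys:
    cond = {x: p / p_y[y0] for (x, y), p in pmf.items() if y == y0}
    mu[y0] = sum(p * x for x, p in cond.items())
    v[y0] = sum(p * (x - mu[y0]) ** 2 for x, p in cond.items())
    t[y0] = sum(p * (x - mu[y0]) ** 3 for x, p in cond.items())

EX = sum(p * x for (x, y), p in pmf.items())
lhs = sum(p * (x - EX) ** 3 for (x, y), p in pmf.items())  # mu_3(X)

# Right side: E[mu_3(X|Y)] + mu_3(E[X|Y]) + 3 Cov(E[X|Y], Var(X|Y)).
# Note E[E[X|Y]] = E[X] by the law of total expectation.
E_t = sum(p_y[y] * t[y] for y in ys)
mu3_of_mu = sum(p_y[y] * (mu[y] - EX) ** 3 for y in ys)
E_v = sum(p_y[y] * v[y] for y in ys)
cov = sum(p_y[y] * (mu[y] - EX) * (v[y] - E_v) for y in ys)

assert lhs == E_t + mu3_of_mu + 3 * cov  # identity holds exactly
```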

For higher cumulants, a simple and elegant generalization exists. See law of total cumulance.



Wikimedia Foundation. 2010.
