Cochran's theorem
In statistics, Cochran's theorem, devised by William G. Cochran,[1] is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.[2]
Statement
Suppose U1, ..., Un are independent standard normally distributed random variables, and an identity of the form

U_1^2 + U_2^2 + \cdots + U_n^2 = Q_1 + Q_2 + \cdots + Q_k

can be written, where each Qi is a sum of squares of linear combinations of the Us. Further suppose that

r_1 + r_2 + \cdots + r_k = n,

where ri is the rank of Qi. Cochran's theorem states that the Qi are independent, and each Qi has a chi-squared distribution with ri degrees of freedom.[citation needed]
Here the rank of Qi should be interpreted as meaning the rank of the matrix B(i), with elements Bj,k(i), in the representation of Qi as a quadratic form:

Q_i = \sum_{j=1}^n \sum_{k=1}^n U_j B_{j,k}^{(i)} U_k.

Less formally, it is the number of linear combinations included in the sum of squares defining Qi, provided that these linear combinations are linearly independent.
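To make the quadratic forms concrete, here is a minimal numerical sketch (assuming NumPy; the particular matrices below anticipate the sample-variance example in the next section and are not part of the theorem's statement): B(1) is the centering matrix and B(2) the projection onto the all-ones direction, with ranks n − 1 and 1.

```python
import numpy as np

n = 5
J = np.ones((n, n)) / n   # projection onto the all-ones direction
B1 = np.eye(n) - J        # B^(1): centering matrix, Q_1 = sum_i (U_i - Ubar)^2
B2 = J                    # B^(2): Q_2 = n * Ubar^2

U = np.random.default_rng(0).standard_normal(n)
Q1 = U @ B1 @ U
Q2 = U @ B2 @ U
print(np.isclose(Q1 + Q2, U @ U))                            # the identity sum U_i^2 = Q_1 + Q_2
print(np.linalg.matrix_rank(B1), np.linalg.matrix_rank(B2))  # ranks n - 1 and 1
```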
Examples
Sample mean and sample variance
If X1, ..., Xn are independent normally distributed random variables with mean μ and standard deviation σ, then

U_i = \frac{X_i - \mu}{\sigma}

is standard normal for each i. It is possible to write

\sum_{i=1}^n U_i^2 = \sum_{i=1}^n \frac{(X_i - \overline{X})^2}{\sigma^2} + n \frac{(\overline{X} - \mu)^2}{\sigma^2}

(here, summation is from 1 to n, that is, over the observations). To see this identity, multiply throughout by σ2 and note that

\sum (X_i - \mu)^2 = \sum \left( (X_i - \overline{X}) + (\overline{X} - \mu) \right)^2

and expand to give

\sum (X_i - \mu)^2 = \sum (X_i - \overline{X})^2 + \sum (\overline{X} - \mu)^2 + 2 \sum (X_i - \overline{X})(\overline{X} - \mu).

The third term is zero because it is equal to a constant times

\sum (X_i - \overline{X}) = 0,

and the second term has just n identical terms added together. Thus

\sum (X_i - \mu)^2 = \sum (X_i - \overline{X})^2 + n (\overline{X} - \mu)^2,

and hence

\sum \frac{(X_i - \mu)^2}{\sigma^2} = \underbrace{\sum \frac{(X_i - \overline{X})^2}{\sigma^2}}_{Q_1} + \underbrace{n \frac{(\overline{X} - \mu)^2}{\sigma^2}}_{Q_2} = Q_1 + Q_2.
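The identity is easy to check numerically; the following sketch (assuming NumPy, with arbitrary illustrative parameters) verifies that the cross term vanishes:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 2.0, 3.0, 10
X = rng.normal(mu, sigma, n)
Xbar = X.mean()

lhs = np.sum((X - mu) ** 2)
rhs = np.sum((X - Xbar) ** 2) + n * (Xbar - mu) ** 2
print(np.isclose(lhs, rhs))  # True: the cross term 2(Xbar - mu) * sum(X_i - Xbar) is zero
```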
Now the rank of Q2 is just 1 (it is the square of just one linear combination of the standard normal variables). The rank of Q1 can be shown to be n − 1, and thus the conditions for Cochran's theorem are met.
Cochran's theorem then states that Q1 and Q2 are independent, with chi-squared distributions with n − 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent.[citation needed]
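A Monte Carlo sketch of this conclusion (assuming NumPy and SciPy; the sample size and replication count are arbitrary) compares Q1 and Q2 against their chi-squared laws and checks that they are uncorrelated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.0, 1.0, 8, 100_000
X = rng.normal(mu, sigma, (reps, n))
Xbar = X.mean(axis=1)

Q1 = ((X - Xbar[:, None]) ** 2).sum(axis=1) / sigma**2  # should be chi2(n - 1)
Q2 = n * (Xbar - mu) ** 2 / sigma**2                    # should be chi2(1)

print(stats.kstest(Q1, stats.chi2(n - 1).cdf).pvalue)   # large p-value: consistent fit
print(stats.kstest(Q2, stats.chi2(1).cdf).pvalue)
print(np.corrcoef(Q1, Q2)[0, 1])                        # near 0, as independence requires
```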
Distributions
The result for the distributions is written symbolically as

\sum (X_i - \overline{X})^2 \sim \sigma^2 \chi^2_{n-1},
n (\overline{X} - \mu)^2 \sim \sigma^2 \chi^2_1.

Both these random variables are proportional to the true but unknown variance σ2. Thus their ratio does not depend on σ2 and, because they are statistically independent, the distribution of their ratio is given by

\frac{n (\overline{X} - \mu)^2}{\frac{1}{n-1} \sum (X_i - \overline{X})^2} \sim F_{1, n-1},

where F1,n − 1 is the F-distribution with 1 and n − 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution.
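The same simulation setup can be reused to check the ratio (again a sketch assuming NumPy and SciPy; note that σ2 cancels, so no knowledge of the true variance is needed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, n, reps = 5.0, 2.0, 12, 100_000
X = rng.normal(mu, sigma, (reps, n))
Xbar = X.mean(axis=1)

num = n * (Xbar - mu) ** 2                                 # sigma^2 * chi2(1)
den = ((X - Xbar[:, None]) ** 2).sum(axis=1) / (n - 1)     # sigma^2 * chi2(n - 1) / (n - 1)
ratio = num / den                                          # sigma^2 cancels

print(stats.kstest(ratio, stats.f(1, n - 1).cdf).pvalue)   # consistent with F(1, n - 1)
```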
Estimation of variance
To estimate the variance σ2, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution,

\widehat{\sigma}^2 = \frac{1}{n} \sum (X_i - \overline{X})^2.

Cochran's theorem shows that

\frac{n \widehat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-1},

and the properties of the chi-squared distribution show that the expected value of \widehat{\sigma}^2 is σ2(n − 1)/n.
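The downward bias is easy to see empirically; this sketch (assuming NumPy, with arbitrary parameters) compares the average of the estimator with σ2(n − 1)/n:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n, reps = 0.0, 2.0, 5, 200_000
X = rng.normal(mu, sigma, (reps, n))

# MLE of the variance: divides by n, not n - 1
sigma_hat_sq = ((X - X.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)
print(sigma_hat_sq.mean())     # empirical mean of the estimator
print(sigma**2 * (n - 1) / n)  # theoretical value: 4 * 4/5 = 3.2
```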
Alternative formulation
The following version is often seen when considering linear regression.[citation needed] Suppose that Y ∼ Nn(0, σ2In) is a multivariate normal random vector (here In denotes the n-by-n identity matrix), and that A1, ..., Ak are n-by-n symmetric matrices with A_1 + A_2 + \cdots + A_k = I_n. Then, on defining ri = Rank(Ai), any one of the following conditions implies the other two (a numerical illustration follows the list):
- r_1 + r_2 + \cdots + r_k = n,
- Y^T A_i Y \sim \sigma^2 \chi^2_{r_i} for each i (thus the Ai are positive semidefinite),
- Y^T A_i Y is independent of Y^T A_j Y for all i ≠ j.
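As an illustration (a sketch, not from the source; it assumes NumPy and uses the hat matrix of a toy regression design together with its complement, a standard choice of such a decomposition):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 20, 3
D = rng.standard_normal((n, p))       # toy design matrix
H = D @ np.linalg.inv(D.T @ D) @ D.T  # hat matrix: symmetric projection, rank p
A1, A2 = H, np.eye(n) - H             # A1 + A2 = I_n

r1, r2 = np.linalg.matrix_rank(A1), np.linalg.matrix_rank(A2)
print(r1 + r2 == n)                   # condition 1: ranks sum to n

sigma = 1.5
Y = rng.normal(0.0, sigma, n)
Q1, Q2 = Y @ A1 @ Y, Y @ A2 @ Y
# By the theorem, Q1 ~ sigma^2 chi2(p) and Q2 ~ sigma^2 chi2(n - p), independently.
print(Q1, Q2)
```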
See also
- Cramér's theorem, on decomposing the normal distribution
- Infinite divisibility (probability)
References
- ^ Cochran, W. G. (April 1934). "The distribution of quadratic forms in a normal system, with applications to the analysis of covariance". Mathematical Proceedings of the Cambridge Philosophical Society 30 (2): 178–191. doi:10.1017/S0305004100016595.
- ^ Bapat, R. B. (2000). Linear Algebra and Linear Models (Second ed.). Springer. ISBN 9780387988719.