(i.e., all errors have the same variance; that is "homoscedasticity"), and
*<math>\operatorname{Cov}(\varepsilon_i,\varepsilon_j)=0</math> for "i" ≠ "j"; that is, "uncorrelatedness."
A linear estimator of <math>\beta_j</math> is a linear combination
:<math>\widehat\beta_j = c_{1j}Y_1+\cdots+c_{nj}Y_n</math>
in which the coefficients <math>c_{ij}</math> are not allowed to depend on the underlying coefficients <math>\beta_j</math>, since those are not observable, but are allowed to depend on the values <math>X_{ij}</math>, since these data are observable. (The dependence of the coefficients on each <math>X_{ij}</math> is typically nonlinear; the estimator is linear in each <math>Y_i</math> and hence in each random <math>\varepsilon_i</math>; that is why this is "linear" regression.) The estimator is unbiased if and only if
:<math>\operatorname{E}\left[\widehat\beta_j\right]=\beta_j</math>
regardless of the values of <math>X_{ij}</math>.
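As a concrete check, here is a minimal NumPy sketch (with an arbitrary simulated design matrix) of the unbiasedness condition: writing the linear estimator as <math>CY</math>, unbiasedness for every "β" amounts to <math>CX=I</math>, which the OLS coefficients <math>C=(X'X)^{-1}X'</math> satisfy.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative design matrix X (n observations, K regressors); the numbers are arbitrary.
n, K = 50, 3
X = rng.normal(size=(n, K))

# A linear estimator has the form beta_hat = C @ Y, where the coefficients c_ij in C
# may depend on X but not on the unobservable beta or epsilon.
# The OLS coefficients C = (X'X)^{-1} X' are one example.
C = np.linalg.inv(X.T @ X) @ X.T

# Unbiasedness: E[C @ Y] = C @ X @ beta = beta for every beta, i.e. C @ X = I.
print(np.allclose(C @ X, np.eye(K)))   # expected: True
</syntaxhighlight>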
Now, let <math>\sum_{j=1}^K\lambda_j\beta_j</math> be some linear combination of the coefficients. Then the mean squared error of the corresponding estimation is defined as
:<math>\operatorname{E}\left[\left(\sum_{j=1}^K\lambda_j\left(\widehat\beta_j-\beta_j\right)\right)^2\right],</math>
i.e., it is the expectation of the square of the difference between the estimator and the parameter to be estimated. (The mean squared error of an estimator coincides with the estimator's variance if the estimator is unbiased; for biased estimators the mean squared error is the sum of the variance and the square of the bias.) A best linear unbiased estimator of "β" is the one with the smallest mean squared error for every vector "λ" of linear combination coefficients. This is equivalent to the condition that
:<math>\operatorname{Var}\left(\widetilde\beta\right)-\operatorname{Var}\left(\widehat\beta\right)</math>
is a positive semi-definite matrix for every other linear unbiased estimator <math>\widetilde\beta</math>.
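The sketch below illustrates this condition numerically, under the assumption of homoscedastic, uncorrelated errors with variance <math>\sigma^2</math> (so a linear estimator <math>CY</math> has variance matrix <math>\sigma^2 CC'</math>). It builds a second linear unbiased estimator by adding to the OLS coefficients an arbitrary matrix "D" with <math>DX=0</math>, and checks that the variance difference is positive semi-definite; the data, <math>\sigma^2</math>, and "D" are all invented for illustration.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical full-rank design matrix and an assumed common error variance.
n, K = 50, 3
X = rng.normal(size=(n, K))
sigma2 = 2.0

# OLS coefficients C, and another linear unbiased estimator C_tilde = C + D,
# where D @ X = 0 preserves unbiasedness (C_tilde @ X = I).
C = np.linalg.inv(X.T @ X) @ X.T
P = X @ C                                        # hat matrix X(X'X)^{-1}X'
D = rng.normal(size=(K, n)) @ (np.eye(n) - P)    # any such D satisfies D @ X = 0
C_tilde = C + D

# With homoscedastic, uncorrelated errors, Var(C @ Y) = sigma^2 * C @ C.T.
var_ols   = sigma2 * C @ C.T
var_other = sigma2 * C_tilde @ C_tilde.T

# The difference should be positive semi-definite: all eigenvalues >= 0 (up to rounding).
print(np.linalg.eigvalsh(var_other - var_ols).min() >= -1e-8)   # expected: True
</syntaxhighlight>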
The ordinary least squares (OLS) estimator is the function
:<math>\widehat\beta=(X'X)^{-1}X'Y</math>
of "Y" and "X" (where <math>X'</math> denotes the transpose of "X") that minimizes the sum of squares of residuals
:<math>\sum_{i=1}^n\left(Y_i-\widehat{Y}_i\right)^2=\sum_{i=1}^n\left(Y_i-\sum_{j=1}^K\widehat\beta_j X_{ij}\right)^2.</math>
(It is easy to confuse the concept of "error" introduced earlier in this article with this concept of "residual". For an account of the differences and the relationship between them, see errors and residuals in statistics.)
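As a minimal worked example, the NumPy sketch below applies the OLS formula above to simulated data (the true coefficients and noise level are invented for illustration) and cross-checks the result against NumPy's built-in least-squares routine.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# Simulated data from a hypothetical linear model Y = X beta + epsilon.
n, K = 100, 3
X = rng.normal(size=(n, K))
beta = np.array([1.5, -2.0, 0.5])
Y = X @ beta + rng.normal(scale=1.0, size=n)

# OLS estimator: beta_hat = (X'X)^{-1} X' Y.
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y

# Residuals and their sum of squares, the quantity that OLS minimizes.
residuals = Y - X @ beta_hat
rss = np.sum(residuals ** 2)

# Cross-check against NumPy's least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq), rss)
</syntaxhighlight>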
The theorem now states that the OLS estimator is a BLUE. The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination whose coefficients do not depend upon the unobservable "β" but whose expected value is always zero.
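The following sketch illustrates this idea numerically (the data and the particular choice of "D" are arbitrary): "DY" is a linear unbiased estimator of zero whenever <math>DX=0</math>, and under homoscedastic, uncorrelated errors its covariance with the OLS estimator, <math>\sigma^2 CD'</math>, vanishes because <math>X'D'=(DX)'=0</math>.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical design matrix.
n, K = 50, 3
X = rng.normal(size=(n, K))

# OLS coefficient matrix C, and a linear unbiased estimator of zero D @ Y:
# D @ X = 0 makes E[D @ Y] = D @ X @ beta = 0 for every beta.
C = np.linalg.inv(X.T @ X) @ X.T
P = X @ C                                        # hat matrix
D = rng.normal(size=(K, n)) @ (np.eye(n) - P)    # any such D satisfies D @ X = 0

# With uncorrelated, homoscedastic errors, Cov(C @ Y, D @ Y) = sigma^2 * C @ D.T,
# which vanishes because C @ D.T = (X'X)^{-1} X' D' and X' D' = (D X)' = 0.
print(np.allclose(D @ X, 0), np.allclose(C @ D.T, 0))   # expected: True True
</syntaxhighlight>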
Generalized least squares estimator
The generalized least squares (GLS) or Aitken estimator extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix; the Aitken estimator is also a BLUE. [A. C. Aitken, "On Least Squares and Linear Combinations of Observations", "Proceedings of the Royal Society of Edinburgh", 1935, vol. 55, pp. 42–48.]
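A minimal sketch of the Aitken estimator, assuming the error covariance matrix "Ω" is known (here a diagonal, heteroscedastic matrix invented for illustration): <math>\widehat\beta_{\mathrm{GLS}}=(X'\Omega^{-1}X)^{-1}X'\Omega^{-1}Y</math>.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

# Simulated data with a non-scalar (here diagonal, heteroscedastic) error covariance Omega.
n, K = 100, 2
X = rng.normal(size=(n, K))
beta = np.array([1.0, -0.5])
variances = rng.uniform(0.5, 4.0, size=n)        # assumed known error variances
Omega = np.diag(variances)
Y = X @ beta + rng.normal(size=n) * np.sqrt(variances)

# Aitken / GLS estimator: beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} Y.
Omega_inv = np.linalg.inv(Omega)
beta_gls = np.linalg.inv(X.T @ Omega_inv @ X) @ X.T @ Omega_inv @ Y

# OLS remains unbiased here, but it is no longer "best"; GLS is the BLUE in this setting.
beta_ols = np.linalg.inv(X.T @ X) @ X.T @ Y
print(beta_gls, beta_ols)
</syntaxhighlight>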
See also
*Independent and identically-distributed random variables
*Linear regression
*Measurement uncertainty
*Best linear unbiased prediction
Notes
External links
* [http://members.aol.com/jeff570/g.html Earliest Known Uses of Some of the Words of Mathematics: G] (brief history and explanation of its name)
* [http://www.xycoon.com/ols1.htm Proof of the Gauss Markov theorem for multiple linear regression] (makes use of matrix algebra)
* [http://emlab.berkeley.edu/GMTheorem/index.html A Proof of the Gauss Markov theorem using geometry]