Studentized residual
In statistics, a studentized residual, named in honor of William Sealy Gosset, who wrote under the pseudonym "Student", is a residual adjusted by dividing it by an estimate of its standard deviation. Studentization of residuals is an important technique in the detection of outliers.

Errors versus residuals
It is very important to understand the difference between errors and residuals in statistics. Consider the simple linear regression model

$$Y_i = \alpha_0 + \alpha_1 x_i + \varepsilon_i, \qquad i = 1, \ldots, n,$$

where the errors ε_i, i = 1, ..., n, are independent and all have the same variance σ². The residuals are not the true, and unobservable, errors, but rather are estimates, based on the observable data, of the errors. When the method of least squares is used to estimate α0 and α1, then the residuals, unlike the errors, cannot be independent, since they satisfy the two constraints
$$\sum_{i=1}^n \widehat{\varepsilon}_i = 0$$

and

$$\sum_{i=1}^n \widehat{\varepsilon}_i x_i = 0.$$
(Here $\varepsilon_i$ is the i-th error, and $\widehat{\varepsilon}_i$ is the i-th residual.) Moreover, the residuals, unlike the errors, do not all have the same variance: the variance decreases as the corresponding x-value gets farther from the average x-value. "The fact that the variances of the residuals differ, even though the variances of the true errors are all equal to each other, is the principal reason for the need for studentization."
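The following Monte Carlo sketch illustrates both points numerically (it is only an illustration, assuming Python with NumPy; the x-values, seed, and simulation size are arbitrary choices, not from the source): the residuals of a least-squares fit satisfy the two constraints, and their variances differ, being smallest for the x-value farthest from the average, even though every true error has the same variance.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])    # the last x-value is far from the average
X = np.column_stack([np.ones_like(x), x])   # regressors: intercept and x

n_sims = 20000
residuals = np.empty((n_sims, x.size))
for s in range(n_sims):
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)  # true error variance is 1
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares estimates
    residuals[s] = y - X @ beta_hat

# The residuals of any single fit satisfy the two constraints (up to rounding):
print(residuals[0].sum())            # ~ 0
print((residuals[0] * x).sum())      # ~ 0

# Their variances differ: smallest for x = 10, the value farthest from the mean,
# even though every true error has variance 1.
print(residuals.var(axis=0))
```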
How to studentize
For this simple model, the design matrix is

$$X = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}$$

and the hat matrix H is the matrix of the orthogonal projection onto the column space of the design matrix:

$$H = X (X^\top X)^{-1} X^\top.$$
The "leverage" "h""ii" is the "i"th diagonal entry in the hat matrix. The variance of the "i"th residual is
:
The corresponding studentized residual is then

$$t_i = \frac{\widehat{\varepsilon}_i}{\widehat{\sigma} \sqrt{1 - h_{ii}}},$$

where $\widehat{\sigma}$ is an appropriate estimate of σ.
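A minimal sketch of these quantities in Python (assuming NumPy; the function name `hat_matrix` and the toy data are illustrative choices made here, not taken from the source):

```python
import numpy as np

def hat_matrix(x):
    """Hat matrix H = X (X^T X)^{-1} X^T for the design matrix X = [1, x]."""
    X = np.column_stack([np.ones_like(x), x])
    return X @ np.linalg.inv(X.T @ X) @ X.T

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
y = np.array([2.2, 2.8, 3.1, 4.0, 9.5])   # toy data, made up for this sketch
H = hat_matrix(x)
h = np.diag(H)                  # leverages h_ii
resid = y - H @ y               # residuals (H @ y are the fitted values)

print(h)                        # largest for the outlying x-value
print(1.0 - h)                  # var(resid_i) = sigma^2 * (1 - h_ii): smallest there

sigma_hat = np.sqrt(resid @ resid / (len(y) - 2))   # one possible estimate of sigma
print(resid / (sigma_hat * np.sqrt(1.0 - h)))       # studentized residuals t_i
```

The choice of the estimate of σ used in the denominator is the subject of the next section.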
Internal and external studentization
The estimate of σ² is

$$\widehat{\sigma}^2 = \frac{1}{n - m} \sum_{j=1}^n \widehat{\varepsilon}_j^{\,2},$$

where m is the number of parameters in the model (2 in our example). But it is desirable to exclude the i-th observation from the process of estimating the variance when one is considering whether the i-th case may be an outlier. Consequently one may use the estimate

$$\widehat{\sigma}_{(i)}^2 = \frac{1}{n - m - 1} \sum_{\substack{j = 1 \\ j \ne i}}^n \widehat{\varepsilon}_j^{\,2},$$

based on all but the i-th case. If the latter estimate is used, excluding the i-th case, then the residual is said to be externally studentized; if the former is used, including the i-th case, then it is internally studentized.
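A hedged sketch of both variants (Python with NumPy; the function name `studentized` and the toy data are assumptions made for illustration, not part of the source):

```python
import numpy as np

def studentized(x, y, external=False):
    """Internally (default) or externally studentized residuals for y = a0 + a1*x + error."""
    X = np.column_stack([np.ones_like(x), x])
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    resid = y - H @ y
    n, m = X.shape
    if external:
        # sigma^2_(i): drop the i-th squared residual and divide by n - m - 1
        sigma2 = (resid @ resid - resid**2) / (n - m - 1)
    else:
        # sigma^2: use all squared residuals and divide by n - m
        sigma2 = np.full(n, resid @ resid / (n - m))
    return resid / np.sqrt(sigma2 * (1.0 - h))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.9, 3.1, 3.9, 5.2, 11.0, 7.1])   # the 5th point looks like an outlier
print(studentized(x, y))                   # internally studentized
print(studentized(x, y, external=True))    # externally studentized
```

Excluding the i-th squared residual keeps a single large outlier from inflating its own denominator, which is why the externally studentized residual typically flags such a point more strongly.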
If the errors are independent and normally distributed with expected value 0 and variance σ², then the probability distribution of the i-th externally studentized residual is a Student's t-distribution with n − m − 1 degrees of freedom, and it can range from $-\infty$ to $+\infty$. On the other hand, the internally studentized residuals are in the range $0 \pm \sqrt{\text{r.d.f.}}$, where r.d.f. is the number of residual degrees of freedom, namely n − m. If i.s.r. represents the internally studentized residual, and again assuming that the errors are independent identically distributed Gaussian variables, then
$$\text{i.s.r.} = t \sqrt{\frac{\text{r.d.f.}}{t^2 + \text{r.d.f.} - 1}},$$
where "t" is distributed as
Student's t-distribution with r.d.f. − 1 degrees of freedom. In fact, this implies that i.s.r.2/r.d.f. follows thebeta distribution "B"(1/2,(r.d.f. − 1)/2). When r.d.f. = 3, the internally studentized residuals are uniformly distributed between and .If there is only one residual degree of freedom, the above formula for the distribution of internally studentized residuals doesn't apply. In this case, the i.s.r.'s are all either +1 or −1, with 50% chance for each.
The standard deviation of the distribution of internally studentized residuals is always 1, but this does not imply that the standard deviation of all the i.s.r.'s of a particular experiment is 1.
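The following simulation sketch (again Python with NumPy; the seed, sample size, and number of replications are arbitrary choices, not from the source) illustrates the r.d.f. = 3 case: with n = 5 observations and m = 2 parameters, the internally studentized residual of a fixed observation never exceeds √3 in absolute value, looks uniform on (−√3, +√3), and has standard deviation close to 1.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = np.column_stack([np.ones_like(x), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)
n, m = X.shape                               # r.d.f. = n - m = 3

isr = []
for _ in range(20000):
    y = 1.0 + 2.0 * x + rng.normal(size=n)   # Gaussian errors, variance 1
    resid = y - H @ y
    sigma_hat = np.sqrt(resid @ resid / (n - m))
    isr.append(resid[0] / (sigma_hat * np.sqrt(1.0 - h[0])))
isr = np.array(isr)

print(np.abs(isr).max(), np.sqrt(3.0))       # never exceeds sqrt(r.d.f.)
print(np.histogram(isr, bins=6, range=(-np.sqrt(3), np.sqrt(3)))[0])
# roughly equal counts per bin, consistent with a uniform distribution
print(isr.std())                             # close to 1
```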
References
* "Residuals and Influence in Regression", R. Dennis Cook, New York :
Chapman and Hall , 1982.