Stein's unbiased risk estimate
In statistics, Stein's unbiased risk estimate (SURE) is an unbiased estimator of the mean-squared error of a given estimator, in a deterministic estimation scenario. In other words, it provides an indication of the accuracy of a given estimator. This is important since, in deterministic estimation, the true mean-squared error of an estimator generally depends on the value of the unknown parameter, and thus cannot be determined completely.
The technique is named after its discoverer, Charles Stein (Stein 1981).
Formal statement
Let $\theta \in \mathbb{R}^n$ be an unknown deterministic parameter and let $x \in \mathbb{R}^n$ be a measurement vector which is distributed normally with mean $\theta$ and covariance $\sigma^2 I$. Suppose $h(x)$ is a weakly differentiable estimator of $\theta$ from $x$. Then, Stein's unbiased risk estimate is given by

$$\mathrm{SURE}(h) = -n\sigma^2 + \|h(x) - x\|^2 + 2\sigma^2 \sum_{i=1}^n \frac{\partial h_i}{\partial x_i}(x),$$

where $h_i(x)$ is the $i$th component of the estimate $h(x)$, and $\|\cdot\|$ is the Euclidean norm.
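To make the formula concrete, the following minimal sketch (in Python with NumPy; the function name sure_linear_shrinkage and the choice of a linear shrinkage estimator $h(x) = a x$ are illustrative assumptions, not part of Stein's formulation) evaluates each term of SURE, using the fact that $\partial h_i/\partial x_i = a$ for this estimator:

import numpy as np

def sure_linear_shrinkage(x, a, sigma):
    """SURE for the linear shrinkage estimator h(x) = a * x.

    Evaluates -n*sigma^2 + ||h(x) - x||^2 + 2*sigma^2 * sum_i dh_i/dx_i,
    where every partial derivative dh_i/dx_i equals a.
    """
    n = x.size
    residual = (a - 1.0) * x       # h(x) - x
    divergence = n * a             # sum_i dh_i/dx_i
    return -n * sigma**2 + np.sum(residual**2) + 2 * sigma**2 * divergence

# Illustrative use: theta is needed only to simulate data, not to compute SURE.
rng = np.random.default_rng(0)
theta = np.linspace(-2.0, 2.0, 50)
sigma = 1.0
x = theta + sigma * rng.standard_normal(theta.size)
print(sure_linear_shrinkage(x, a=0.7, sigma=sigma))

Note that SURE is computed from $x$ and $\sigma$ alone; the simulated $\theta$ enters only through the data.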
The importance of SURE is that it is an unbiased estimate of the mean-squared error (or squared error risk) of $h(x)$, i.e.

$$E_\theta\{\mathrm{SURE}(h)\} = \mathrm{MSE}(h) = E_\theta\|h(x) - \theta\|^2.$$
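In outline, this identity follows by expanding the squared error and applying Stein's lemma to the cross term:

$$
\begin{aligned}
E_\theta\|h(x)-\theta\|^2
&= E_\theta\|x-\theta\|^2 + 2\,E_\theta\!\left[(h(x)-x)^\top(x-\theta)\right] + E_\theta\|h(x)-x\|^2 \\
&= n\sigma^2 + 2\sigma^2\,E_\theta\!\left[\sum_{i=1}^n \frac{\partial h_i}{\partial x_i}(x) - n\right] + E_\theta\|h(x)-x\|^2 \\
&= E_\theta\!\left[-n\sigma^2 + \|h(x)-x\|^2 + 2\sigma^2\sum_{i=1}^n \frac{\partial h_i}{\partial x_i}(x)\right]
= E_\theta\{\mathrm{SURE}(h)\},
\end{aligned}
$$

where Stein's lemma supplies $E_\theta[(h_i(x)-x_i)(x_i-\theta_i)] = \sigma^2\,E_\theta\!\left[\partial h_i/\partial x_i - 1\right]$ for each component.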
Thus, minimizing SURE can be expected to minimize the MSE. Apart from the first term in SURE, which is identical for all estimators, the expression for SURE above does not depend on the unknown parameter $\theta$. It can therefore be manipulated (e.g., to determine optimal estimation settings) without knowledge of $\theta$.
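As a hedged numerical sketch of this point (the shrinkage family $h(x) = a x$ and all variable names are again illustrative), the SURE curve over the shrinkage coefficient $a$ is computable from the data alone, yet its minimizer typically lands close to the minimizer of the oracle squared error, which would require knowing $\theta$:

import numpy as np

rng = np.random.default_rng(1)
n, sigma = 200, 1.0
theta = rng.normal(0.0, 0.8, size=n)          # unknown in practice; used only for the oracle
x = theta + sigma * rng.standard_normal(n)

a_grid = np.linspace(0.0, 1.0, 101)
sure_vals = [-n * sigma**2 + np.sum(((a - 1.0) * x)**2) + 2 * sigma**2 * n * a
             for a in a_grid]                 # computable from x and sigma alone
oracle_err = [np.sum((a * x - theta)**2) for a in a_grid]   # needs the unknown theta

a_sure = a_grid[np.argmin(sure_vals)]
a_oracle = a_grid[np.argmin(oracle_err)]
print(a_sure, a_oracle)                       # typically close to each other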
Applications
A standard application of SURE is to choose a parametric form for an estimator, and then optimize the values of the parameters to minimize the risk estimate. This technique has been applied in several settings. For example, a variant of the James–Stein estimator can be derived by finding the optimal shrinkage estimator. The technique has also been used by Donoho and Johnstone to determine the optimal shrinkage threshold in a wavelet denoising setting (Donoho & Johnstone 1995).
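As an illustration of the wavelet-shrinkage application (a sketch in the spirit of SURE-based threshold selection, not a reproduction of Donoho and Johnstone's procedure; the function names are assumptions), SURE has a closed form for soft thresholding at level $t$, since $\|h(x)-x\|^2 = \sum_i \min(|x_i|, t)^2$ and $\sum_i \partial h_i/\partial x_i = \#\{i : |x_i| > t\}$, and the threshold can be chosen by minimizing it over the observed magnitudes:

import numpy as np

def sure_soft_threshold(x, t, sigma):
    """SURE for the soft-thresholding estimator h_i(x) = sign(x_i) * max(|x_i| - t, 0)."""
    n = x.size
    clipped = np.minimum(np.abs(x), t)            # |h_i(x) - x_i|
    n_kept = np.count_nonzero(np.abs(x) > t)      # sum_i dh_i/dx_i
    return -n * sigma**2 + np.sum(clipped**2) + 2 * sigma**2 * n_kept

def sure_shrink_threshold(x, sigma):
    """Pick the candidate threshold that minimizes SURE; candidates are 0 and the |x_i|."""
    candidates = np.concatenate(([0.0], np.abs(x)))
    sure_vals = [sure_soft_threshold(x, t, sigma) for t in candidates]
    return candidates[int(np.argmin(sure_vals))]

# Illustrative use on a sparse coefficient vector observed in Gaussian noise.
rng = np.random.default_rng(2)
coeffs = np.zeros(256)
coeffs[:16] = 5.0
x = coeffs + rng.standard_normal(coeffs.size)
t = sure_shrink_threshold(x, sigma=1.0)
denoised = np.sign(x) * np.maximum(np.abs(x) - t, 0.0)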
References
Stein, Charles M. (1981). "Estimation of the Mean of a Multivariate Normal Distribution". The Annals of Statistics 9 (6): 1135–1151. doi:10.1214/aos/1176345632.
Donoho, David L.; Johnstone, Iain M. (1995). "Adapting to Unknown Smoothness via Wavelet Shrinkage". Journal of the American Statistical Association 90 (432): 1200–1244. doi:10.2307/2291512.