Sufficiency (statistics)

In statistics, **sufficiency** is the property possessed by a statistic, with respect to a parameter, "when no other statistic which can be calculated from the same sample provides any additional information as to the value of the parameter" (Fisher, Ronald (1922). "On the Mathematical Foundations of Theoretical Statistics". Phil. Trans. R. Soc. Lond. A 222: 309–368. doi:10.1098/rsta.1922.0009). The concept is due to Sir Ronald Fisher. The most general statement of the above is that, conditional on the value of a sufficient statistic, the distribution of samples drawn is independent of the underlying parameter(s) the statistic is sufficient for. Both the statistic and the underlying parameter can be vectors.

The concept has fallen out of favor in descriptive statistics because of its strong dependence on an assumed distributional form, but it remains very important in theoretical work (Stigler, Stephen (Dec. 1973). "Studies in the History of Probability and Statistics. XXXII: Laplace, Fisher and the Discovery of the Concept of Sufficiency". Biometrika 60 (3): 439–445. doi:10.2307/2334992).

**Mathematical definition**

The concept is most general when defined as follows: a statistic "T"("X") is

**sufficient for underlying parameter θ** precisely if the conditional probability distribution of the data "X", given the statistic "T"("X"), is independent of the parameter θ (Casella, George; Berger, Roger L. (2002). Statistical Inference, 2nd ed. Duxbury Press), i.e.

:$\Pr(X = x \mid T(X) = t, \theta) = \Pr(X = x \mid T(X) = t),$

or in shorthand

:$\Pr(x \mid t, \theta) = \Pr(x \mid t).$

**Example**

As an example, the sample mean is sufficient for the mean (μ) of a normal distribution with known variance. Once the sample mean is known, no further information about μ can be obtained from the sample itself.

**Fisher–Neyman factorization theorem**

"Fisher's factorization theorem" or "factorization criterion" provides a convenient

**characterization** of a sufficient statistic. If the probability density function is f_θ("x"), then "T" is sufficient for θ if and only if functions "g" and "h" can be found such that

:$f_\theta(x) = h(x) \, g_\theta(T(x)),$

i.e. the density f can be factored into a product such that one factor, "h", does not depend on θ, and the other factor, which does depend on θ, depends on "x" only through "T"("x").
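As a quick numerical illustration (a sketch added here, not part of the standard exposition; the sample values and θ grid are arbitrary), the factorization can be verified for a normal sample with known variance σ² = 1, where T(x) = Σ x_i, g_θ(t) = exp(θt − nθ²/2) and h(x) = (2π)^{−n/2} exp(−Σ x_i²/2):

```python
import math

def normal_joint_density(xs, theta):
    # Product of N(theta, 1) densities -- the full likelihood of the sample.
    return math.prod(math.exp(-(x - theta) ** 2 / 2) / math.sqrt(2 * math.pi) for x in xs)

def h(xs):
    # Factor free of theta.
    return (2 * math.pi) ** (-len(xs) / 2) * math.exp(-sum(x * x for x in xs) / 2)

def g(theta, t, n):
    # Factor depending on theta, and on the data only through t = sum(xs).
    return math.exp(theta * t - n * theta ** 2 / 2)

xs = [0.3, -1.2, 2.5, 0.0]
t = sum(xs)
ok = all(
    abs(normal_joint_density(xs, th) - h(xs) * g(th, t, len(xs))) < 1e-12
    for th in [-1.0, 0.0, 0.7, 2.0]
)
```

Collecting the θ-dependent terms of the exponent into g and the rest into h is exactly the factorization the theorem describes.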

**Interpretation**

An implication of the theorem is that in likelihood-based inference, two sets of data yielding the same value of the sufficient statistic "T"("X") will always yield the same inferences about θ. By the factorization criterion, the likelihood depends on θ only through "T"("X"); since this value is the same for both data sets, the dependence on θ is the same as well, leading to identical inferences.
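For instance (an illustrative sketch with arbitrary numbers): for i.i.d. exponential data with rate θ, the density θe^{−θx} has h(x) = 1, so the likelihood depends on the data only through T(x) = Σ x_i, and two samples with the same sum yield identical likelihood functions:

```python
import math

def exp_log_likelihood(xs, rate):
    # Log-likelihood of i.i.d. Exponential(rate) data: n*log(rate) - rate*sum(xs).
    return len(xs) * math.log(rate) - rate * sum(xs)

a = [1.0, 2.0, 3.0]   # sum = 6
b = [0.5, 0.5, 5.0]   # different data, same sufficient statistic: sum = 6
rates = [0.2, 1.0, 3.0]
# Same T(x) -> the two log-likelihood functions coincide at every rate
# (here exactly equal, since h(x) = 1 for the exponential density).
same = all(abs(exp_log_likelihood(a, r) - exp_log_likelihood(b, r)) < 1e-12 for r in rates)
```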

**Proof for the continuous case**

This proof is due to Hogg and Craig (ISBN 978-0023557224). Let "X"_1, "X"_2, ..., "X"_n denote a random sample from a distribution having the pdf "f"("x"; θ) for γ < θ < δ. Let "Y"_1 = "u"_1("X"_1, "X"_2, ..., "X"_n) be a statistic whose pdf is "g"_1("y"_1; θ). Then "Y"_1 is a sufficient statistic for θ if and only if, for some function "H",

:$\prod_{i=1}^{n} f(x_i; \theta) = g_1\left[u_1(x_1, x_2, \dots, x_n); \theta\right] H(x_1, x_2, \dots, x_n).$

First, suppose that

:$\prod_{i=1}^{n} f(x_i; \theta) = g_1\left[u_1(x_1, x_2, \dots, x_n); \theta\right] H(x_1, x_2, \dots, x_n).$

We shall make the transformation "y"_i = "u"_i("x"_1, "x"_2, ..., "x"_n), for "i" = 1, ..., "n", having inverse functions "x"_i = "w"_i("y"_1, "y"_2, ..., "y"_n), for "i" = 1, ..., "n", and Jacobian "J". Thus,

:$\prod_{i=1}^{n} f\left[w_i(y_1, y_2, \dots, y_n); \theta\right] |J| = |J| \, g_1(y_1; \theta) \, H\left[w_1(y_1, y_2, \dots, y_n), \dots, w_n(y_1, y_2, \dots, y_n)\right].$

The left-hand member is the joint pdf "g"("y"_1, "y"_2, ..., "y"_n; θ) of "Y"_1 = "u"_1("X"_1, ..., "X"_n), ..., "Y"_n = "u"_n("X"_1, ..., "X"_n). In the right-hand member, $g_1(y_1; \theta)$ is the pdf of $Y_1$, so that $H[w_1, \dots, w_n] \, |J|$ is the quotient of $g(y_1, \dots, y_n; \theta)$ and $g_1(y_1; \theta)$; that is, it is the conditional pdf $h(y_2, \dots, y_n \mid y_1; \theta)$ of $Y_2, \dots, Y_n$ given $Y_1 = y_1$.

But $H(x_1, x_2, \dots, x_n)$, and thus $H\left[w_1(y_1, \dots, y_n), \dots, w_n(y_1, \dots, y_n)\right]$, was given not to depend upon $\theta$. Since $\theta$ was not introduced in the transformation and accordingly not in the Jacobian $J$, it follows that $h(y_2, \dots, y_n \mid y_1; \theta)$ does not depend upon $\theta$ and that $Y_1$ is a sufficient statistic for $\theta$.

The converse is proven by taking

:$g(y_1, \dots, y_n; \theta) = g_1(y_1; \theta) \, h(y_2, \dots, y_n \mid y_1),$

where $h(y_2, \dots, y_n \mid y_1)$ does not depend upon $\theta$, because $Y_2, \dots, Y_n$ depend only upon $X_1, \dots, X_n$, which are independent of $\theta$ when conditioned on $Y_1$, a sufficient statistic by hypothesis. Now divide both members by the absolute value of the non-vanishing Jacobian $J$, and replace $y_1, \dots, y_n$ by the functions $u_1(x_1, \dots, x_n), \dots, u_n(x_1, \dots, x_n)$ in $x_1, \dots, x_n$. This yields

:$\prod_{i=1}^{n} f(x_i; \theta) = \frac{g\left[u_1(x_1, \dots, x_n), \dots, u_n(x_1, \dots, x_n); \theta\right]}{|J|} = g_1\left[u_1(x_1, \dots, x_n); \theta\right] \frac{h(u_2, \dots, u_n \mid u_1)}{|J|},$

where $h(u_2, \dots, u_n \mid u_1)/|J|$ is a function of $x_1, \dots, x_n$ alone that does not depend upon $\theta$. This is the required factorization, so $Y_1$ is sufficient.
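To see the criterion in action, here is an illustrative numerical sketch (arbitrary sample values) using the uniform case treated in the examples: for X_i ~ Uniform(0, θ), Y_1 = max_i X_i has pdf g_1(y; θ) = n y^{n−1}/θ^n on [0, θ], and the factorization holds with H(x) = 1/(n · max(x)^{n−1}), which does not involve θ:

```python
def joint_density(xs, theta):
    # Joint pdf of i.i.d. Uniform(0, theta): theta^-n when all points fit, else 0.
    return theta ** (-len(xs)) if max(xs) <= theta else 0.0

def g1(y, theta, n):
    # pdf of the sample maximum Y1 = max(X1, ..., Xn).
    return n * y ** (n - 1) / theta ** n if 0 <= y <= theta else 0.0

def H(xs):
    # Remaining factor: depends on the data only, not on theta.
    m = max(xs)
    return 1.0 / (len(xs) * m ** (len(xs) - 1))

xs = [0.2, 1.7, 0.9]
factored = all(
    abs(joint_density(xs, th) - g1(max(xs), th, len(xs)) * H(xs)) < 1e-12
    for th in [2.0, 3.0, 10.0]
)
```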

**Proof for the discrete case**

We use the shorthand notation $f_\theta(x,t)$ to denote the joint probability of $(X, T(X))$. Since $T$ is a function of $X$, we have $f_\theta(x,t) = f_\theta(x)$ whenever $t = T(x)$ (and zero otherwise), and thus:

:$f_\theta(x) = f_\theta(x,t) = f_{\theta|t}(x) \, f_\theta(t)$

with the last equality being true by the definition of conditional probability distributions. If $T$ is sufficient, then $f_{\theta|t}(x)$ does not depend on $\theta$, so $f_\theta(x) = a(x) \, b_\theta(t)$ with $a(x) = f_{\theta|t}(x)$ and $b_\theta(t) = f_\theta(t)$.

Reciprocally, if $f_\theta(x) = a(x) \, b_\theta(t)$, we have

:$\begin{align}f_\theta(t) &= \sum_{x : T(x) = t} f_\theta(x, t) \\ &= \sum_{x : T(x) = t} f_\theta(x) \\ &= \sum_{x : T(x) = t} a(x) \, b_\theta(t) \\ &= \left( \sum_{x : T(x) = t} a(x) \right) b_\theta(t)\end{align}$

with the first equality by the definition of the pdf for multiple variables, the second by the remark above, the third by hypothesis, and the fourth because the summation is not over $t$.

Thus, the conditional probability distribution is:

:$\begin{align}f_{\theta|t}(x) &= \frac{f_\theta(x, t)}{f_\theta(t)} \\ &= \frac{f_\theta(x)}{f_\theta(t)} \\ &= \frac{a(x) \, b_\theta(t)}{\left( \sum_{x : T(x) = t} a(x) \right) b_\theta(t)} \\ &= \frac{a(x)}{\sum_{x : T(x) = t} a(x)}\end{align}$

With the first equality by definition of conditional probability density, the second by the remark above, the third by the equality proven above, and the fourth by simplification. This expression does not depend on $\theta$, and thus $T$ is a sufficient statistic. ["The Fisher–Neyman Factorization Theorem", http://cnx.org/content/m11480/1.6/]

**Minimal sufficiency**

A sufficient statistic is **minimal sufficient** if it can be represented as a function of any other sufficient statistic. In other words, "S"("X") is **minimal sufficient** if and only if

#"S"("X") is sufficient, and
#if "T"("X") is sufficient, then there exists a function "f" such that "S"("X") = "f"("T"("X")).

Intuitively, a minimal sufficient statistic "most efficiently" captures all possible information about the parameter θ.

A useful characterization of minimal sufficiency is that, when the density $f_\theta$ exists, "S"("X") is **minimal sufficient** if and only if

:$\frac{f_\theta(x)}{f_\theta(y)}$ is independent of θ $\Longleftrightarrow$ "S"("x") = "S"("y").

This follows as a direct consequence of Fisher's factorization theorem stated above.

A sufficient and complete statistic is necessarily minimal sufficient. A minimal sufficient statistic exists under weak regularity conditions, whereas a complete statistic need not exist.
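This characterization can be illustrated numerically (a sketch with arbitrary samples) for Bernoulli data, where S(x) = Σ x_i is minimal sufficient: the ratio f_p(x)/f_p(y) is constant in p exactly when the two samples have the same sum:

```python
def bernoulli_density(xs, p):
    # Joint pmf of independent Bernoulli(p) observations.
    out = 1.0
    for x in xs:
        out *= p if x == 1 else (1 - p)
    return out

def ratio_constant_in_p(x, y):
    # True if f_p(x)/f_p(y) takes the same value at every p on a grid.
    ps = [0.1, 0.3, 0.5, 0.7, 0.9]
    ratios = [bernoulli_density(x, p) / bernoulli_density(y, p) for p in ps]
    return max(ratios) - min(ratios) < 1e-12

same_sum = ratio_constant_in_p([1, 0, 1, 0], [0, 1, 0, 1])   # sums 2 and 2 -> constant
diff_sum = ratio_constant_in_p([1, 1, 1, 0], [0, 1, 0, 1])   # sums 3 and 2 -> varies with p
```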

**Examples**

**Bernoulli distribution**

If "X"_1, ..., "X"_n are independent Bernoulli-distributed random variables with expected value "p", then the sum "T"("X") = "X"_1 + ... + "X"_n is a sufficient statistic for "p" (here 'success' corresponds to $X_i = 1$ and 'failure' to $X_i = 0$, so "T" is the total number of successes). This is seen by considering the joint probability distribution:

:$\Pr(X = x) = \Pr(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n).$

Because the observations are independent, this can be written as

:$p^{x_1}(1-p)^{1-x_1} \, p^{x_2}(1-p)^{1-x_2} \cdots p^{x_n}(1-p)^{1-x_n}$

and, collecting powers of "p" and 1 − "p", gives

:$p^{\sum x_i}(1-p)^{n - \sum x_i} = p^{T(x)}(1-p)^{n - T(x)}$

which satisfies the factorization criterion, with "h"("x") = 1 being just a constant.
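One can also check directly (an illustrative sketch) that, given T = t, the conditional distribution of the sample is uniform over the C(n, t) arrangements with t successes and does not involve p, as the definition of sufficiency requires:

```python
from itertools import product
from math import comb

def cond_prob(x, p):
    # P(X = x | T = t) for Bernoulli(p) data, computed from the joint law.
    n, t = len(x), sum(x)
    px = p ** t * (1 - p) ** (n - t)
    pt = sum(
        p ** sum(y) * (1 - p) ** (n - sum(y))
        for y in product([0, 1], repeat=n)
        if sum(y) == t
    )
    return px / pt

x = (1, 0, 1, 1)
vals = [cond_prob(x, p) for p in (0.2, 0.5, 0.8)]
# The conditional probability equals 1 / C(4, 3) = 0.25 for every p.
free_of_p = max(vals) - min(vals) < 1e-12 and abs(vals[0] - 1 / comb(4, 3)) < 1e-12
```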

Note the crucial feature: the unknown parameter "p" interacts with the data "x" only via the statistic "T"("x") = Σ "x"_i.

**Uniform distribution**

If "X"_1, ..., "X"_n are independent and uniformly distributed on the interval [0, θ], then "T"("X") = max("X"_1, ..., "X"_n) is sufficient for θ. To see this, consider the joint probability distribution:

:$\Pr(X = x) = \Pr(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n).$

Because the observations are independent, this can be written as

:$\frac{\operatorname{H}(\theta - x_1)}{\theta} \cdot \frac{\operatorname{H}(\theta - x_2)}{\theta} \cdots \frac{\operatorname{H}(\theta - x_n)}{\theta}$

where H("x") is the Heaviside step function. This may be written as

:$\frac{\operatorname{H}\left(\theta - \max_i \{x_i\}\right)}{\theta^n}$

which can be viewed as a function of only θ and max_i("x"_i) = "T"("x"). This shows that the factorization criterion is satisfied, again where "h"("x") = 1 is constant. Note that the parameter θ interacts with the data only through the data's maximum.

**Poisson distribution**

If "X"_1, ..., "X"_n are independent and have a Poisson distribution with parameter λ, then the sum "T"("X") = "X"_1 + ... + "X"_n is a sufficient statistic for λ. To see this, consider the joint probability distribution:

:$\Pr(X = x) = \Pr(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n).$

Because the observations are independent, this can be written as

:$\frac{e^{-\lambda} \lambda^{x_1}}{x_1!} \cdot \frac{e^{-\lambda} \lambda^{x_2}}{x_2!} \cdots \frac{e^{-\lambda} \lambda^{x_n}}{x_n!}$

which may be written as

:$e^{-n\lambda} \lambda^{(x_1 + x_2 + \cdots + x_n)} \cdot \frac{1}{x_1! \, x_2! \cdots x_n!}$

which shows that the factorization criterion is satisfied, where "h"("x") is the reciprocal of the product of the factorials. Note that the parameter λ interacts with the data only through its sum "T"("X").
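A quick numerical check of this factorization (an illustrative sketch with arbitrary counts and λ values):

```python
import math

def poisson_joint_pmf(xs, lam):
    # Joint pmf of independent Poisson(lam) observations.
    return math.prod(math.exp(-lam) * lam ** x / math.factorial(x) for x in xs)

def g(lam, t, n):
    # Factor carrying all dependence on lambda, through t = sum(xs) only.
    return math.exp(-n * lam) * lam ** t

def h(xs):
    # Reciprocal of the product of factorials -- free of lambda.
    return 1.0 / math.prod(math.factorial(x) for x in xs)

xs = [2, 0, 3, 1]
t = sum(xs)
ok = all(
    abs(poisson_joint_pmf(xs, lam) - g(lam, t, len(xs)) * h(xs)) < 1e-12
    for lam in [0.5, 1.0, 4.0]
)
```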

**Rao–Blackwell theorem**

Sufficiency finds a useful application in the Rao–Blackwell theorem. It states that if "g"("X") is any kind of estimator of θ, then typically the conditional expectation of "g"("X") given "T"("X") is a better estimator of θ, and is never worse. Sometimes one can very easily construct a very crude estimator "g"("X"), and then evaluate that conditional expected value to obtain an estimator that is in various senses optimal.

**See also**

*Completeness of a statistic

*Basu's theorem on independence of complete sufficient and ancillary statistics

*Lehmann–Scheffé theorem: a complete sufficient estimator is the best estimator of its expectation
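As a concrete sketch of the Rao–Blackwell improvement described above (an assumed setup, not from the original text: Bernoulli(p) samples, crude estimator g(X) = X_1; conditioning on T = Σ X_i gives E[X_1 | T] = T/n, the sample mean):

```python
import random

random.seed(0)
n, p, trials = 10, 0.3, 20000
crude, improved = [], []
for _ in range(trials):
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    crude.append(xs[0])            # crude unbiased estimator: first observation only
    improved.append(sum(xs) / n)   # E[X_1 | T] = T / n, the Rao-Blackwellized version

def var(vs):
    # Plain sample variance.
    m = sum(vs) / len(vs)
    return sum((v - m) ** 2 for v in vs) / len(vs)

better = var(improved) < var(crude)
```

The crude estimator is unbiased but ignores most of the sample; conditioning on the sufficient statistic keeps it unbiased while shrinking its variance by roughly a factor of n.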
