# Neyman-Pearson lemma

In statistics, the Neyman-Pearson lemma states that when performing a hypothesis test between two point hypotheses $H_0:\theta=\theta_0$ and $H_1:\theta=\theta_1$, the likelihood-ratio test which rejects $H_0$ in favour of $H_1$ when

:$\Lambda(x)=\frac{L(\theta_0 \mid x)}{L(\theta_1 \mid x)} \leq \eta \mbox{ where } P(\Lambda(X)\leq \eta \mid H_0)=\alpha$

is the most powerful test of size $\alpha$ for the threshold $\eta$. If the test is most powerful for all $\theta_1 \in \Theta_1$, it is said to be uniformly most powerful (UMP) for alternatives in the set $\Theta_1$.

In practice, the likelihood ratio is often used directly to construct tests (see Likelihood-ratio test). However, it can also be used to suggest particular test statistics that might be of interest, or to suggest simplified tests; for this, one considers algebraic manipulation of the ratio to see whether key statistics in it are related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).
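The rejection rule above can be sketched numerically. The following is a minimal illustration, not part of the original article, for two point hypotheses about a normal mean with known variance; the function names and the sample data are invented for the example:

```python
import math

def likelihood(theta, xs, sigma=1.0):
    """Likelihood of an i.i.d. N(theta, sigma^2) sample: the product of densities."""
    return math.prod(
        math.exp(-((x - theta) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        for x in xs
    )

def np_reject(xs, theta0, theta1, eta):
    """Neyman-Pearson rule: reject H0 when Lambda = L(theta0|x)/L(theta1|x) <= eta."""
    return likelihood(theta0, xs) / likelihood(theta1, xs) <= eta

# A small sample whose mean is close to theta1 = 1, so H0: theta = 0 is rejected.
data = [1.2, 0.8, 1.5, 0.9, 1.1]
print(np_reject(data, theta0=0.0, theta1=1.0, eta=1.0))  # True
```

For this normal-mean case the ratio simplifies to a function of the sample mean alone, which is exactly the kind of "key statistic" the algebraic manipulation described above is meant to uncover.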

## Proof

Define the rejection region of the null hypothesis for the Neyman-Pearson (NP) test as $R_{NP}=\left\{ x: \frac{L(\theta_0 \mid x)}{L(\theta_1 \mid x)} \leq \eta \right\}$, and let any other test have a different rejection region, which we denote $R_A$. Furthermore, for a region $R$ and parameter $\theta$, define $P(R,\theta)=\int_{R} L(\theta \mid x)\, dx$; this is the probability of the data falling in region $R$, given parameter $\theta$.

For both tests to have significance level $\alpha$, it must be true that $\alpha = P(R_{NP},\theta_0)=P(R_A,\theta_0)$; however, it is useful to break these down into integrals over distinct regions:

:$P(R_{NP} \cap R_A, \theta) + P(R_{NP} \cap R_A^c, \theta) = P(R_{NP}, \theta)$

and

:$P(R_{NP} \cap R_A, \theta) + P(R_{NP}^c \cap R_A, \theta) = P(R_A, \theta)$

Setting $\theta=\theta_0$ and equating the above two expressions yields

:$P(R_{NP} \cap R_A^c, \theta_0) = P(R_{NP}^c \cap R_A, \theta_0).$

Comparing the powers of the two tests, $P(R_{NP},\theta_1)$ and $P(R_A,\theta_1)$, one can see that

:$P(R_{NP}, \theta_1) \geq P(R_A, \theta_1) \mbox{ if, and only if, } P(R_{NP} \cap R_A^c, \theta_1) \geq P(R_{NP}^c \cap R_A, \theta_1).$

Now by the definition of $R_{NP}$,

:$P(R_{NP} \cap R_A^c, \theta_1) = \int_{R_{NP}\cap R_A^c} L(\theta_1 \mid x)\,dx \geq \frac{1}{\eta} \int_{R_{NP}\cap R_A^c} L(\theta_0 \mid x)\,dx = \frac{1}{\eta}P(R_{NP} \cap R_A^c, \theta_0)$
:$= \frac{1}{\eta}P(R_{NP}^c \cap R_A, \theta_0) = \frac{1}{\eta}\int_{R_{NP}^c \cap R_A} L(\theta_0 \mid x)\,dx \geq \int_{R_{NP}^c \cap R_A} L(\theta_1 \mid x)\,dx = P(R_{NP}^c \cap R_A, \theta_1).$

Hence the inequality holds.
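The proof's conclusion, that no other test of the same size achieves greater power, can be checked numerically in a concrete case. The sketch below (an illustration, not part of the original derivation) compares the power of the one-sided test for a normal mean, which is the NP test here, against a two-sided test of the same size; all names and parameter values are choices made for this example:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (sufficient precision for this sketch)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha, mu1, n = 0.05, 1.0, 4        # test H0: mu = 0 vs H1: mu = 1, sigma = 1
se = 1.0 / math.sqrt(n)             # standard error of the sample mean

# NP (one-sided) test: reject when the sample mean exceeds c; size alpha under H0.
c = Phi_inv(1 - alpha) * se
power_np = 1 - Phi((c - mu1) / se)

# A rival test of the same size: the two-sided z-test.
d = Phi_inv(1 - alpha / 2) * se
power_alt = (1 - Phi((d - mu1) / se)) + Phi((-d - mu1) / se)

print(round(power_np, 3), round(power_alt, 3))
print(power_np >= power_alt)  # True, as the lemma guarantees
```

The two-sided test spends part of its size on the irrelevant left tail, so its power against $\mu_1 > 0$ is strictly lower.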

## Example

Let $X_1,\dots,X_n$ be a random sample from the $\mathcal{N}(\mu,\sigma^2)$ distribution where the mean $\mu$ is known, and suppose that we wish to test $H_0:\sigma^2=\sigma_0^2$ against $H_1:\sigma^2=\sigma_1^2$.

The likelihood for this set of normally distributed data is

:$L\left(\sigma^2;\mathbf{x}\right)\propto \left(\sigma^2\right)^{-n/2} \exp\left\{-\frac{\sum_{i=1}^n \left(x_i-\mu\right)^2}{2\sigma^2}\right\}.$

We can compute the likelihood ratio to find the key statistic in this test and its effect on the test's outcome:

:$\Lambda(\mathbf{x}) = \frac{L\left(\sigma_0^2;\mathbf{x}\right)}{L\left(\sigma_1^2;\mathbf{x}\right)} = \left(\frac{\sigma_0^2}{\sigma_1^2}\right)^{-n/2}\exp\left\{-\frac{1}{2}\left(\sigma_0^{-2}-\sigma_1^{-2}\right)\sum_{i=1}^n \left(x_i-\mu\right)^2\right\}.$

This ratio depends on the data only through $\sum_{i=1}^n \left(x_i-\mu\right)^2$. Therefore, by the Neyman-Pearson lemma, the most powerful test of this pair of hypotheses for these data will depend only on $\sum_{i=1}^n \left(x_i-\mu\right)^2$. Also, by inspection, if $\sigma_1^2>\sigma_0^2$, then $\Lambda(\mathbf{x})$ is a decreasing function of $\sum_{i=1}^n \left(x_i-\mu\right)^2$, so we should reject $H_0$ if $\sum_{i=1}^n \left(x_i-\mu\right)^2$ is sufficiently large. The rejection threshold depends on the size of the test.
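As a rough numerical check of this example, one can estimate the rejection threshold for $\sum_{i=1}^n (x_i-\mu)^2$ by Monte Carlo and confirm that the resulting test has power well above its size. The sample sizes, seed, and parameter values below are arbitrary choices for illustration:

```python
import random

def T(xs, mu):
    """The key statistic from the likelihood ratio: sum of (x_i - mu)^2."""
    return sum((x - mu) ** 2 for x in xs)

random.seed(1)
n, mu, sigma0, sigma1, alpha = 10, 0.0, 1.0, 2.0, 0.05

# Under H0, T/sigma0^2 is chi-squared with n degrees of freedom; estimate the
# upper-alpha quantile of T by simulation (since sigma1^2 > sigma0^2, we reject
# when T is large).
sims = sorted(
    T([random.gauss(mu, sigma0) for _ in range(n)], mu) for _ in range(100_000)
)
threshold = sims[int((1 - alpha) * len(sims))]

# Empirical power under H1: sigma^2 = sigma1^2.
hits = sum(
    T([random.gauss(mu, sigma1) for _ in range(n)], mu) > threshold
    for _ in range(20_000)
)
print(round(threshold, 2))   # close to the chi-squared(10) 95th percentile, about 18.3
print(hits / 20_000)         # empirical power, well above alpha = 0.05
```

In practice the threshold would be read off the chi-squared distribution directly rather than simulated; the simulation here just makes the "sufficiently large" criterion concrete.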

## See also

* Statistical power

## References

* Neyman, J.; Pearson, E. S. (1933). "On the Problem of the Most Efficient Tests of Statistical Hypotheses". *Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character* **231**: 289–337. doi:10.1098/rsta.1933.0009.

* [http://cnx.org/content/m11548/latest/ cnx.org: Neyman-Pearson criterion]

* MIT OpenCourseWare lecture notes: [http://ocw.mit.edu/NR/rdonlyres/Mathematics/18-443Fall2003/18B765F6-A398-48BF-A893-49A4965DED98/0/lec19.pdf most powerful tests], [http://ocw.mit.edu/NR/rdonlyres/Mathematics/18-443Fall2003/D6F12E47-A9A2-4FE0-AC3C-588B6A5EE5B6/0/lec20.pdf uniformly most powerful tests]
