Likelihood-ratio test

The likelihood ratio, often denoted by \Lambda (the capital Greek letter lambda), is the ratio of the maximum probability of a result under two different hypotheses. A likelihood-ratio test is a statistical test for deciding between two hypotheses based on the value of this ratio.

Simple versus simple hypotheses

A statistical model is often a parametrized family of probability density functions or probability mass functions f(x; \theta). A simple-vs-simple hypothesis test specifies single values of \theta under both the null and alternative hypotheses:

:\begin{align} H_0 &: \theta = \theta_0 \\ H_A &: \theta = \theta_A \end{align}

Note that under either hypothesis, the distribution of the data is fully specified; there are no unknown parameters to estimate. The likelihood ratio test statistic is (Cox, D. R. and Hinkley, D. V., "Theoretical Statistics", Chapman and Hall, 1974, p. 92):

:\Lambda = \frac{f(x; \theta_A)}{f(x; \theta_0)}

(some references may use the reciprocal as the definition). The likelihood ratio test rejects the null hypothesis H_0 if the ratio exceeds a critical value c. That is, the decision rule has the form:

:If \Lambda \ge c, reject H_0;
:If \Lambda < c, accept (or do not reject) H_0.

The critical value c is usually chosen to obtain a specified significance level \alpha, through the relation

:P_0(\Lambda \ge c) = \alpha

(if x is discrete, some randomization on the boundary may be needed). The Neyman–Pearson lemma states that this likelihood ratio test is the most powerful among all level-\alpha tests for this problem.
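As an illustrative sketch (not from the article), suppose the two simple hypotheses fully specify normal models N(\theta_0, 1) and N(\theta_A, 1) for an i.i.d. sample. The Python snippet below evaluates \Lambda as defined above and applies the decision rule; the data values and the critical value c are hypothetical.

import numpy as np
from scipy.stats import norm

# Hypothetical sample and fully specified hypotheses
x = np.array([0.4, 1.2, -0.3, 0.9, 1.5])
theta_0, theta_A = 0.0, 1.0            # H_0: N(0, 1), H_A: N(1, 1)

# Joint density of the sample under each simple hypothesis
log_f0 = norm.logpdf(x, loc=theta_0, scale=1.0).sum()
log_fA = norm.logpdf(x, loc=theta_A, scale=1.0).sum()

# Likelihood ratio Lambda = f(x; theta_A) / f(x; theta_0)
Lambda = np.exp(log_fA - log_f0)

c = 3.0                                # hypothetical critical value for the chosen alpha
print(Lambda, "reject H_0" if Lambda >= c else "do not reject H_0")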

Definition (maximum likelihood ratio test for composite hypotheses)

A null hypothesis is often stated by saying that the parameter \theta lies in a specified subset \Theta_0 of the parameter space \Theta. The likelihood function L(\theta) = L(\theta \mid x) = p(x \mid \theta) = f_{\theta}(x) is a function of the parameter \theta with x held fixed at the value that was actually observed, i.e., the data. The likelihood ratio is

:\Lambda(x) = \frac{\sup\{\, L(\theta \mid x) : \theta \in \Theta_0 \,\}}{\sup\{\, L(\theta \mid x) : \theta \in \Theta \,\}}.

Many common test statistics such as the Z-test, the F-test, Pearson's chi-square test and the G-test can be phrased as log-likelihood ratios or approximations thereof.
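For instance (a standard derivation, not spelled out in the article), for a sample x_1, \dots, x_n from a normal distribution with known variance \sigma^2, testing H_0 : \mu = \mu_0 against the unrestricted alternative, the log-likelihood is maximized at \hat{\mu} = \bar{x}, and

:-2\log\Lambda = \frac{\sum_i (x_i - \mu_0)^2 - \sum_i (x_i - \bar{x})^2}{\sigma^2} = \frac{n(\bar{x} - \mu_0)^2}{\sigma^2} = z^2,

where z is the usual Z-statistic, so the likelihood-ratio test coincides with the two-sided Z-test.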

Interpretation

Being a function of the data x, the likelihood ratio \Lambda is therefore a statistic. The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true).

The numerator corresponds to the maximum likelihood of the observed result under the null hypothesis. The denominator corresponds to the maximum likelihood of the observed result over the whole parameter space. Since \Theta_0 is a subset of \Theta, the numerator can never exceed the denominator, so the likelihood ratio lies between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the best-fitting alternative; values close to 1 mean that the observed result was nearly as likely under the null hypothesis as under the unrestricted model.

Approximation

If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be determined explicitly, then it can be used directly to form decision regions (to accept or reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine. A convenient result, though, says that as the sample size n approaches infinity, the test statistic -2\log\Lambda is asymptotically \chi^2-distributed with degrees of freedom equal to the difference in dimensionality of \Theta and \Theta_0. This means that, for a great variety of hypotheses, a practitioner can compute the likelihood ratio \Lambda for the observed data, form -2\log\Lambda, compare it to the \chi^2 value corresponding to the desired statistical significance, and base the decision on that comparison.
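A rough sketch of this recipe (illustrative values, not from the article): for a normal sample with known variance, testing H_0 : \theta = 0 against an unrestricted mean leaves a one-dimensional difference between \Theta and \Theta_0, so -2\log\Lambda is compared with a \chi^2 quantile on one degree of freedom.

import numpy as np
from scipy.stats import norm, chi2

x = np.array([0.8, 1.6, -0.2, 1.1, 0.5, 1.9])   # hypothetical sample, variance taken as known (1)

# sup of the likelihood over Theta_0 (theta = 0) and over Theta (theta = sample mean)
log_L_null = norm.logpdf(x, loc=0.0, scale=1.0).sum()
log_L_full = norm.logpdf(x, loc=x.mean(), scale=1.0).sum()

stat = -2 * (log_L_null - log_L_full)           # -2 log Lambda

critical = chi2.ppf(0.95, df=1)                 # dim(Theta) - dim(Theta_0) = 1, alpha = 0.05
print(stat, critical, stat > critical)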

Examples

Medical

One example of a likelihood ratio is the likelihood that a given test result would be expected in a patient with a certain disorder, compared to the likelihood that the same result would occur in a patient without the target disorder.
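In terms of a diagnostic test's sensitivity and specificity this ratio can be written down directly (a standard reformulation, not given in the article; the numbers are hypothetical):

# Hypothetical operating characteristics of a diagnostic test
sensitivity = 0.90      # P(positive result | disorder present)
specificity = 0.80      # P(negative result | disorder absent)

lr_positive = sensitivity / (1 - specificity)   # likelihood ratio of a positive result
lr_negative = (1 - sensitivity) / specificity   # likelihood ratio of a negative result
print(lr_positive, lr_negative)                 # 4.5 and 0.125 for these values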

As another example, imagine trying to determine whether one is standing in line for tickets to a football game or to the opera (assuming one cannot ask people which line it is, cannot see any signs, etc.). The only thing one is allowed to do is ask other people in line whether or not they like football. One estimates that 90% of people in the line for a football game like football, while 10% of people in the line for the opera like football. The likelihood ratio is then computed as:

:\frac{P(\text{likes football} \mid \text{in line for the football game})}{P(\text{likes football} \mid \text{in line for the opera})} = \frac{0.9}{0.1} = 9.

The larger the likelihood ratio, the higher the chance of correctly inferring whether one is at the football game or at the opera from people's responses. In other words, if the likelihood ratio is large, one can be more confident in the decision about which line one is in, even having asked only a limited number of people whether or not they like football. For an infinite likelihood ratio, one would be certain of being in line for the football game after asking only one person, who answered "yes".
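A minimal sketch of this reasoning (it assumes the answers are independent, which the example does not state explicitly; the list of answers is hypothetical):

# Probabilities given in the example
P_YES_FOOTBALL = 0.9    # P(likes football | football line)
P_YES_OPERA = 0.1       # P(likes football | opera line)

def single_answer_lr(says_yes):
    """Likelihood ratio (football line vs. opera line) from one person's answer."""
    if says_yes:
        return P_YES_FOOTBALL / P_YES_OPERA                 # 9 for a "yes"
    return (1 - P_YES_FOOTBALL) / (1 - P_YES_OPERA)         # 1/9 for a "no"

answers = [True, True, False, True]     # hypothetical responses from people in line
combined_lr = 1.0
for a in answers:
    combined_lr *= single_answer_lr(a)  # independent answers multiply
print(combined_lr)                      # 9 * 9 * (1/9) * 9 = 81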

Coin tossing

As an example, in the case of Pearson's test, we might compare two coins to determine whether they have the same probability of coming up heads. Our observations can be put into a contingency table with rows corresponding to the coins and columns corresponding to heads or tails. The elements of the contingency table will be the number of times each coin came up heads or tails. The contents of this table are our observation X.

         Heads     Tails
Coin 1   k_{1H}    k_{1T}
Coin 2   k_{2H}    k_{2T}
Here \Theta consists of the parameters p_{1H}, p_{1T}, p_{2H}, and p_{2T}, which are the probabilities that coin 1 (2) comes up heads (tails). The hypothesis space H is defined by the usual constraints on a distribution: p_{ij} \ge 0, p_{ij} \le 1, and p_{iH} + p_{iT} = 1. The null hypothesis H_0 is the sub-space where p_{1j} = p_{2j}. In all of these constraints, i = 1,2 and j = H,T.

Writing n_{ij} for the best values for p_{ij} under the hypothesis H, maximum likelihood is achieved with

:n_{ij} = \frac{k_{ij}}{k_{iH} + k_{iT}}.

Writing m_{ij} for the best values for p_{ij} under the null hypothesis H_0, maximum likelihood is achieved with

:m_{ij} = \frac{k_{1j} + k_{2j}}{k_{1H} + k_{2H} + k_{1T} + k_{2T}},

which does not depend on the coin i.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired nice distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H_0, the asymptotic distribution for the test will be \chi^2(1), the \chi^2 distribution with one degree of freedom.

For the general contingency table, we can write the log-likelihood ratio statistic as

:-2 \log \Lambda = 2 \sum_{i,j} k_{ij} \log \frac{n_{ij}}{m_{ij}}.
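A sketch of this computation for hypothetical counts, using the expressions for n_{ij} and m_{ij} given above:

import numpy as np
from scipy.stats import chi2

# Hypothetical contingency table: rows are coins, columns are (heads, tails)
k = np.array([[30.0, 20.0],
              [22.0, 28.0]])

n = k / k.sum(axis=1, keepdims=True)            # n_ij: each coin estimated separately
m = np.tile(k.sum(axis=0) / k.sum(), (2, 1))    # m_ij: pooled estimate under H_0

stat = 2 * (k * np.log(n / m)).sum()            # -2 log Lambda

print(stat, chi2.ppf(0.95, df=1))               # compare with the chi^2(1) critical value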

Criticism

Theoretical

Bayesian criticisms of classical likelihood ratio tests focus on two issues:
1. the use of the supremum function in the calculation of the likelihood ratio, saying that this takes no account of the uncertainty about θ and that using maximum likelihood estimates in this way can promote complicated alternative hypotheses with an excessive number of free parameters;
2. testing the probability that the sample would produce a result as extreme "or more extreme" under the null hypothesis, saying that this bases the test on the probability of extreme events that did not happen. Instead they put forward methods such as Bayes factors, which explicitly take uncertainty about the parameters into account, and which are based on the evidence that did occur.

Practical

In medicine, the use of likelihood ratio tests has been promoted to assist in interpreting diagnostic tests [Jaeschke R, Guyatt GH, Sackett DL. Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? The Evidence-Based Medicine Working Group. JAMA 1994;271(9):703–7. PMID 8309035]. A large likelihood ratio, for example a value of more than 10, helps rule in disease. A small likelihood ratio, for example a value of less than 0.1, helps rule out disease [McGee S. Simplifying likelihood ratios. Journal of General Internal Medicine 2002;17(8):646–9. PMID 12213147]. However, physicians rarely make these calculations [Reid MC, Lane DA, Feinstein AR. Academic calculations versus clinical judgments: practicing physicians' use of quantitative measures of test accuracy. Am. J. Med. 1998;104(4):374–80. PMID 9576412. doi:10.1016/S0002-9343(98)00054-0], and when they do, they sometimes make errors [Steurer J, Fischer JE, Bachmann LM, Koller M, ter Riet G. Communicating accuracy of tests to general practitioners: a controlled study. BMJ 2002;324(7341):824–6. PMID 11934776. doi:10.1136/bmj.324.7341.824]. A randomized controlled trial that compared how well physicians interpreted diagnostic tests presented as sensitivity and specificity, a likelihood ratio, or an inexact graphic of the likelihood ratio found no difference in their ability to interpret test results [Puhan MA, Steurer J, Bachmann LM, ter Riet G. A randomized trial of ways to describe test accuracy: the effect on physicians' post-test probability estimates. Ann. Intern. Med. 2005;143(3):184–9. PMID 16061916].
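A hedged sketch of how such a ratio is typically applied (the pre-test/post-test odds relation is standard in this literature but is not derived in the article; the numbers are hypothetical):

pretest_prob = 0.20                  # hypothetical pre-test probability of disease
lr = 12.0                            # a likelihood ratio above 10, helping to rule in disease

pretest_odds = pretest_prob / (1 - pretest_prob)
posttest_odds = pretest_odds * lr    # the likelihood ratio scales the odds
posttest_prob = posttest_odds / (1 + posttest_odds)
print(round(posttest_prob, 2))       # 0.75 for these values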

See also

* Likelihood function
* Score function
* Likelihood principle

External links

* [http://www.itl.nist.gov/div898/handbook/apr/section2/apr233.htm Practical application of Likelihood-ratio test described]
* [http://faculty.vassar.edu/lowry/clin2.html Vassar College's Likelihood Ratio Given Sensitivity/Specificity/Prevalence] Online Calculator

