Bayes factor
In statistics, the use of Bayes factors is a Bayesian alternative to classical hypothesis testing. [Goodman S. "Toward evidence-based medical statistics. 1: The P value fallacy". Ann Intern Med 1999; 130 (12): 995–1004. PMID 10383371] [Goodman S. "Toward evidence-based medical statistics. 2: The Bayes factor". Ann Intern Med 1999; 130 (12): 1005–13. PMID 10383350]

Definition
Given a model selection problem in which we must choose between two models M1 and M2 on the basis of an observed data vector x, the Bayes factor K is given by

    K = \frac{p(x \mid M_1)}{p(x \mid M_2)},

where p(x \mid M_i) is called the marginal likelihood for model i. This is similar to a likelihood-ratio test, but instead of maximizing the likelihood, Bayesians average it over the parameters. Generally, the models M1 and M2 will be parametrized by vectors of parameters θ1 and θ2; thus K is given by

    K = \frac{\int p(\theta_1 \mid M_1)\, p(x \mid \theta_1, M_1)\, d\theta_1}{\int p(\theta_2 \mid M_2)\, p(x \mid \theta_2, M_2)\, d\theta_2}.
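As a concrete illustration, the following is a minimal sketch of this averaging done numerically, assuming SciPy is available; the data vector and the two models (a normal model with unknown mean versus one with unknown scale) are hypothetical placeholders, not taken from this article.

    import numpy as np
    from scipy import integrate, stats

    x = np.array([0.8, 1.2, 0.5, 1.9, 1.1])  # hypothetical data vector

    def marginal_likelihood(likelihood, prior, lo, hi):
        # p(x | M) = integral over theta of prior(theta) * p(x | theta, M)
        value, _ = integrate.quad(lambda t: prior(t) * likelihood(t), lo, hi)
        return value

    # M1: data ~ Normal(mu, 1), with a standard normal prior on mu.
    lik1 = lambda mu: np.prod(stats.norm.pdf(x, loc=mu, scale=1.0))
    prior1 = lambda mu: stats.norm.pdf(mu, loc=0.0, scale=1.0)

    # M2: data ~ Normal(0, sigma), with a uniform prior on sigma over (0.1, 10).
    lik2 = lambda s: np.prod(stats.norm.pdf(x, loc=0.0, scale=s))
    prior2 = lambda s: stats.uniform.pdf(s, loc=0.1, scale=9.9)

    K = (marginal_likelihood(lik1, prior1, -np.inf, np.inf)
         / marginal_likelihood(lik2, prior2, 0.1, 10.0))
    print(f"Bayes factor K = {K:.3f}")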
The logarithm of K is sometimes called the weight of evidence given by x for M1 over M2, measured in bits, nats, or bans, according to whether the logarithm is taken to base 2, base e, or base 10.
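As a small worked illustration of these units, here is a snippet converting a single Bayes factor into each of them; the value 1.197 anticipates the worked example later in this article.

    import math

    K = 1.197  # the Bayes factor from the worked example below

    print(f"{math.log2(K):.3f} bits")           # logarithm to base 2
    print(f"{math.log(K):.3f} nats")            # natural logarithm
    print(f"{math.log10(K):.3f} bans")          # logarithm to base 10
    print(f"{10 * math.log10(K):.3f} decibans")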
Interpretation

A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. Harold Jeffreys gave a scale for the interpretation of K [H. Jeffreys, The Theory of Probability (3rd ed.), Oxford (1961), p. 432]:

    K          | decibans | bits       | Strength of evidence
    < 1        | < 0      | < 0        | Negative (supports M2)
    1 to 3     | 0 to 5   | 0 to 1.6   | Barely worth mentioning
    3 to 10    | 5 to 10  | 1.6 to 3.3 | Substantial
    10 to 30   | 10 to 15 | 3.3 to 5.0 | Strong
    30 to 100  | 15 to 20 | 5.0 to 6.6 | Very strong
    > 100      | > 20     | > 6.6      | Decisive

The second column gives the corresponding weights of evidence in decibans (tenths of a power of 10); bits are added in the third column for clarity. According to I. J. Good, a change in a weight of evidence of 1 deciban or 1/3 of a bit (i.e. a change in an odds ratio from evens to about 5:4) is about as finely as humans can reasonably perceive their degree of belief in a hypothesis in everyday use.

The use of Bayes factors or classical hypothesis testing takes place in the context of
inference rather than decision-making under uncertainty. That is, we merely wish to find out which hypothesis is true, rather than actually making a decision on the basis of this information. Frequentist statistics draws a strong distinction between these two because classical hypothesis tests are not coherent in the Bayesian sense. Bayesian procedures, including Bayes factors, are coherent, so there is no need to draw such a distinction. Inference is then simply regarded as a special case of decision-making under uncertainty in which the resulting action is to report a value. For decision-making, Bayesian statisticians might use a Bayes factor combined with a prior distribution and a loss function associated with making the wrong choice. In an inference context the loss function would take the form of a scoring rule. Use of a logarithmic score function, for example, leads to the expected utility taking the form of the Kullback–Leibler divergence. If the logarithms are to the base 2 this is equivalent to Shannon information.
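To make the last point concrete, here is a small hypothetical computation of the Kullback–Leibler divergence in bits (base-2 logarithms); the two distributions are illustrative placeholders, not tied to any particular Bayes-factor calculation.

    import numpy as np

    p = np.array([0.5, 0.3, 0.2])  # "true" distribution
    q = np.array([0.4, 0.4, 0.2])  # reported / model distribution

    # D_KL(p || q) in bits: the expected logarithmic score difference under p.
    kl_bits = np.sum(p * np.log2(p / q))
    print(f"D_KL(p || q) = {kl_bits:.4f} bits")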
Example

Suppose we have a random variable which produces either a success or a failure. We want to compare a model M1 where the probability of success is q = 1/2, and another model M2 where q is completely unknown and we take a prior distribution for q which is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:

    P(X = 115 \mid q) = \binom{200}{115} q^{115} (1 - q)^{85}.
So we have

    P(X = 115 \mid M_1) = \binom{200}{115} \left(\tfrac{1}{2}\right)^{200} \approx 0.005956,

but

    P(X = 115 \mid M_2) = \int_0^1 \binom{200}{115} q^{115} (1 - q)^{85} \, dq = \tfrac{1}{201} \approx 0.004975.

The ratio is then K = 1.197..., which is "barely worth mentioning" even if it points very slightly towards M1.
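These numbers can be checked with a few lines of code; a minimal sketch, assuming SciPy is available:

    from scipy import integrate, stats

    n, k = 200, 115

    # Marginal likelihood under M1: q fixed at 1/2.
    p_m1 = stats.binom.pmf(k, n, 0.5)

    # Marginal likelihood under M2: average the binomial likelihood over a
    # uniform prior on q; analytically this equals 1/(n + 1) = 1/201.
    p_m2, _ = integrate.quad(lambda q: stats.binom.pmf(k, n, q), 0.0, 1.0)

    print(f"P(X = 115 | M1) = {p_m1:.6f}")         # ~0.005956
    print(f"P(X = 115 | M2) = {p_m2:.6f}")         # ~0.004975
    print(f"Bayes factor K  = {p_m1 / p_m2:.4f}")  # ~1.1972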
This is not the same as a classical likelihood ratio test, which would have found the maximum likelihood estimate for q, namely 115/200 = 0.575, and used it to obtain a likelihood ratio of 0.1045..., so pointing towards M2. Alternatively, Edwards's "exchange rate" of two units of likelihood per degree of freedom suggests that M2 is preferable (just) to M1, as 0.1045... = e^{-2.26} and 2.26 > 2: the extra likelihood compensates for the unknown parameter in M2.
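The same comparison in code, again a sketch assuming SciPy:

    import math
    from scipy import stats

    n, k = 200, 115
    q_hat = k / n  # maximum likelihood estimate of q, 0.575

    ratio = stats.binom.pmf(k, n, 0.5) / stats.binom.pmf(k, n, q_hat)
    print(f"likelihood ratio = {ratio:.4f}")            # ~0.1045
    print(f"-ln(ratio)       = {-math.log(ratio):.2f}") # ~2.26 > 2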
A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a more dramatic result, saying that M1 could be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = 1/2 is 0.0200..., and the two-tailed probability of getting a figure as extreme as or more extreme than 115 is 0.0400.... Note that 115 is more than two standard deviations away from 100.
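The quoted tail probabilities can likewise be verified; a sketch assuming SciPy:

    from scipy import stats

    # P(X >= 115) when X ~ Binomial(200, 1/2); sf(114) = P(X > 114).
    p_one = stats.binom.sf(114, 200, 0.5)
    print(f"one-tailed p = {p_one:.4f}")      # ~0.0200
    print(f"two-tailed p = {2 * p_one:.4f}")  # ~0.0400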
M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.

See also
* Bayesian model comparison
* Marginal likelihood
External links
* [http://www.cs.ucsd.edu/users/goguen/courses/275f00/stat.html Bayesian critique of classical hypothesis testing]
* [http://ourworld.compuserve.com/homepages/rajm/jspib.htm Why should clinicians care about Bayesian methods?]
* [http://pcl.missouri.edu/bayesfactor Web application to calculate Bayes factors for t-tests]