Beta-binomial model

In empirical Bayes methods, the Beta-binomial model is an analytic model in which the likelihood function L(x|\theta) is specified by a binomial distribution

:L(x|\theta) = \operatorname{Bin}(x,\theta),

::: = {n\choose x}\,\theta^{x}(1-\theta)^{n-x},

and the conjugate prior \pi(\theta|\alpha,\beta) is a Beta distribution

:\pi(\theta|\alpha,\beta) = \mathrm{Beta}(\alpha,\beta),

::: = \frac{1}{\mathrm{B}(\alpha,\beta)}\,\theta^{\alpha-1}(1-\theta)^{\beta-1}.

The Beta-binomial is a two-dimensional multivariate Polya distribution, as the binomial and Beta distributions are special cases of the multinomial and Dirichlet distributions, respectively.

Derivation of the posterior and the marginal

It is convenient to reparameterize the distributions so that the expected mean of the prior is a single parameter: Let

:\pi(\theta|\mu,M) = \mathrm{Beta}(\mu,M),

::: = \frac{\Gamma(M)}{\Gamma(M\mu)\Gamma(M(1-\mu))}\,\theta^{M\mu-1}(1-\theta)^{M(1-\mu)-1}

where

::: \mu = \frac{\alpha}{\alpha+\beta} \quad \text{and} \quad M = \alpha+\beta,

so that

:::\operatorname{E}(\theta|\mu,M) = \mu, \qquad \operatorname{Var}(\theta|\mu,M) = \frac{\mu(1-\mu)}{M+1}.

The posterior distribution \rho(\theta|x) is also a Beta distribution

:\rho(\theta|x) \propto l(x|\theta)\pi(\theta|\mu,M)

::: = \mathrm{Beta}(x+M\mu,\ n-x+M(1-\mu)),

::: = \frac{\Gamma(M)}{\Gamma(M\mu)\Gamma(M(1-\mu))}{n\choose x}\,\theta^{x+M\mu-1}(1-\theta)^{n-x+M(1-\mu)-1},

while the marginal distribution m(x|\mu,M) is given by

: m(x|\mu,M) = \int_{0}^{1} l(x|\theta)\,\pi(\theta|\mu,M)\,d\theta

:: = \frac{\Gamma(M)}{\Gamma(M\mu)\Gamma(M(1-\mu))}{n\choose x}\int_{0}^{1} \theta^{x+M\mu-1}(1-\theta)^{n-x+M(1-\mu)-1}\,d\theta

:: = \frac{\Gamma(M)}{\Gamma(M\mu)\Gamma(M(1-\mu))}{n\choose x}\,\frac{\Gamma(x+M\mu)\,\Gamma(n-x+M(1-\mu))}{\Gamma(n+M)}.
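Under this parameterization, the conjugate update and the marginal can be sketched with the Python standard library alone (the function names `posterior_params` and `marginal_pmf` are our own, for illustration, not from any particular library):

```python
import math

def log_beta_fn(a, b):
    """log B(a, b) computed via log-gamma for numerical stability."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def posterior_params(x, n, mu, M):
    """Parameters of the posterior Beta(x + M*mu, n - x + M*(1-mu))."""
    return x + M * mu, n - x + M * (1 - mu)

def marginal_pmf(x, n, mu, M):
    """Beta-binomial marginal m(x | mu, M) for x successes in n trials."""
    log_choose = math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
    return math.exp(log_choose
                    + log_beta_fn(x + M * mu, n - x + M * (1 - mu))
                    - log_beta_fn(M * mu, M * (1 - mu)))

# Posterior mean after 7 successes in 10 trials, prior mean 0.5, M = 10:
a, b = posterior_params(7, 10, 0.5, 10.0)
print(a / (a + b))  # (x + M*mu) / (n + M) = 12/20 = 0.6

# The marginal is a proper pmf: it sums to 1 over x = 0..n.
print(sum(marginal_pmf(x, 12, 0.3, 5.0) for x in range(13)))  # ~1.0
```

The Gamma-function ratio in the marginal is exactly a ratio of Beta functions, which is why `log_beta_fn` suffices.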

Moment estimates

Because the marginal is a complex, non-linear function of gamma and digamma functions, it is quite difficult to obtain a marginal maximum likelihood estimate (MMLE) for the mean and variance. Instead, we use the method of iterated expectations to find the expected value of the marginal moments.

Let us write our model as a two-stage compound sampling model (for each event i out of n_i possible events):

:: X_i|\theta_i \sim \operatorname{Bin}(n_i,\theta_i),

:: \theta_i \sim \mathrm{Beta}(\mu,M), \quad \text{i.i.d.},

We can find iterated moment estimates for the mean and variance using the moments for the distributions in the two-stage model:

::\operatorname{E}\left(\frac{X}{n}\right) = \operatorname{E}\left[\operatorname{E}\left(\left.\frac{X}{n}\right|\theta\right)\right] = \operatorname{E}(\theta) = \mu

::\operatorname{var}\left(\frac{X}{n}\right) = \operatorname{E}\left[\operatorname{var}\left(\left.\frac{X}{n}\right|\theta\right)\right] + \operatorname{var}\left[\operatorname{E}\left(\left.\frac{X}{n}\right|\theta\right)\right]

::: = \operatorname{E}\left[\left.\frac{1}{n}\,\theta(1-\theta)\right|\mu,M\right] + \operatorname{var}\left(\theta|\mu,M\right)

::: = \frac{1}{n}\,\mu(1-\mu) + \frac{n-1}{n}\,\frac{\mu(1-\mu)}{M+1}

::: = \frac{\mu(1-\mu)}{n}\left(1+\frac{n-1}{M+1}\right).
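The closed-form moments above can be checked numerically against the exact moments of the marginal pmf (a standard-library sketch; `marginal_pmf` is our own illustrative helper, rebuilt here so the snippet stands alone):

```python
import math

def log_beta_fn(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def marginal_pmf(x, n, mu, M):
    """Beta-binomial marginal m(x | mu, M)."""
    log_choose = math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
    return math.exp(log_choose
                    + log_beta_fn(x + M * mu, n - x + M * (1 - mu))
                    - log_beta_fn(M * mu, M * (1 - mu)))

n, mu, M = 20, 0.25, 8.0
pmf = [marginal_pmf(x, n, mu, M) for x in range(n + 1)]

# Exact mean and variance of X/n under the marginal ...
mean = sum((x / n) * p for x, p in zip(range(n + 1), pmf))
var = sum((x / n - mean) ** 2 * p for x, p in zip(range(n + 1), pmf))

# ... match the iterated-expectation formulas.
var_formula = mu * (1 - mu) / n * (1 + (n - 1) / (M + 1))
print(abs(mean - mu) < 1e-9, abs(var - var_formula) < 1e-9)  # True True
```

Note the extra factor (1 + (n-1)/(M+1)) relative to the binomial variance: this is the overdispersion contributed by the Beta prior.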

We now seek a point estimate \tilde{\theta}_i as a weighted average of the sample estimate \hat{\theta}_i and an estimate \hat{\mu} of the prior mean. The sample estimate \hat{\theta}_i is given by

::\hat{\theta}_i = \frac{x_i}{n_i}.

Therefore, we need point estimates for \mu and M. The estimated mean \hat{\mu} is given as a weighted average

::\hat{\mu} = \frac{\sum_{i=1}^{N} n_i \hat{\theta}_i}{\sum_{i=1}^{N} n_i} = \frac{\sum_{i=1}^{N} x_i}{\sum_{i=1}^{N} n_i}.

The hyperparameter M is obtained using the moment estimates for the variance of the two-stage model:

::s^2 = \frac{1}{N}\sum_{i=1}^{N} \operatorname{var}\left(\frac{x_i}{n_i}\right) = \frac{1}{N}\sum_{i=1}^{N} \frac{\hat{\mu}(1-\hat{\mu})}{n_i}\left[1+\frac{n_i-1}{\hat{M}+1}\right],

where s^2 is estimated from the data by

::s^2 = \frac{N\sum_{i=1}^{N} n_i(\hat{\theta}_i-\hat{\mu})^2}{(N-1)\sum_{i=1}^{N} n_i}.

We can now solve for \hat{M}:

::\hat{M} = \frac{\hat{\mu}(1-\hat{\mu}) - s^2}{s^2 - \frac{\hat{\mu}(1-\hat{\mu})}{N}\sum_{i=1}^{N} 1/n_i}.
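The moment estimates can be sketched directly from these formulas (the function name `moment_estimates` and the data are our own illustrations):

```python
def moment_estimates(xs, ns):
    """Method-of-moments estimates (mu_hat, M_hat) for the two-stage
    Beta-binomial model, following the weighted-average formulas above."""
    N = len(xs)
    n_tot = sum(ns)
    mu = sum(xs) / n_tot                       # pooled mean
    thetas = [x / n for x, n in zip(xs, ns)]   # per-group sample proportions
    s2 = N * sum(n * (t - mu) ** 2
                 for n, t in zip(ns, thetas)) / ((N - 1) * n_tot)
    M = (mu * (1 - mu) - s2) / (s2 - mu * (1 - mu) / N * sum(1 / n for n in ns))
    return mu, M

# Five groups with visible between-group spread (illustrative data):
xs = [1, 10, 2, 14, 3]
ns = [10, 12, 9, 15, 11]
mu_hat, M_hat = moment_estimates(xs, ns)
print(mu_hat, M_hat)  # ~0.526, ~0.74
```

One caveat: when the data show less between-group variance than a plain binomial would (underdispersion), the numerator or denominator can change sign and the estimator returns a negative \hat{M}, which is outside the parameter space.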

Given our point estimates for the prior, we may now plug in these values to find a point estimate for the posterior mean:

:\tilde{\theta}_i = \operatorname{E}(\theta|x_i,\hat{\mu},\hat{M}) = \frac{x_i + \hat{M}\hat{\mu}}{n_i+\hat{M}} = \frac{\hat{M}}{n_i+\hat{M}}\,\hat{\mu} + \frac{n_i}{n_i+\hat{M}}\,\frac{x_i}{n_i}

Shrinkage factors

We may write the posterior estimate as a weighted average:

::\tilde{\theta}_i = \hat{B}_i\,\hat{\mu} + (1-\hat{B}_i)\,\hat{\theta}_i

where \hat{B}_i is called the "shrinkage factor":

::\hat{B}_i = \frac{\hat{M}}{\hat{M}+n_i}
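The shrinkage behavior is easy to see in a small sketch (the helper name `shrinkage_estimates` and the example values are ours, not from the source):

```python
def shrinkage_estimates(xs, ns, mu_hat, M_hat):
    """Posterior point estimates: each group proportion is shrunk toward
    the pooled mean by the factor B_i = M_hat / (M_hat + n_i)."""
    return [(M_hat / (M_hat + n)) * mu_hat
            + (n / (M_hat + n)) * (x / n)
            for x, n in zip(xs, ns)]

# Two groups with the same sample proportion 0.5 but different sizes:
est = shrinkage_estimates([2, 40], [4, 80], mu_hat=0.3, M_hat=10.0)
print(est)  # the small group (n=4) is pulled much closer to 0.3
```

Because B_i grows as n_i shrinks, groups with little data borrow strength from the pooled mean, while large groups stay near their own sample proportion.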

Maximum likelihood estimation

Maximum likelihood estimates from empirical data can be computed using general methods for fitting multinomial Polya distributions, as described in Minka (2003).
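Minka (2003) gives efficient fixed-point iterations for this fit; as a rough illustration only, the marginal likelihood can be maximized by brute force over a grid (a sketch, not Minka's algorithm; data and names are ours):

```python
import math

def log_beta_fn(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def neg_log_marginal(mu, M, xs, ns):
    """Negative log marginal likelihood of the Beta-binomial model.
    Binomial coefficients are dropped: they do not depend on (mu, M)."""
    return -sum(log_beta_fn(x + M * mu, n - x + M * (1 - mu))
                - log_beta_fn(M * mu, M * (1 - mu))
                for x, n in zip(xs, ns))

xs = [1, 10, 2, 14, 3]
ns = [10, 12, 9, 15, 11]

# Crude grid search over mu in (0, 1) and M in (0, 50):
best = min(((mu / 100, 0.25 * k)
            for mu in range(1, 100)
            for k in range(1, 200)),
           key=lambda p: neg_log_marginal(p[0], p[1], xs, ns))
print(best)  # grid MMLE of (mu, M)
```

The grid search makes the MMLE objective concrete but scales poorly; in practice one would use Minka's fixed-point or Newton iterations on the same objective.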

See also

* multivariate Polya distribution

External links

* [http://www.cs.ubc.ca/~murphyk/Teaching/Stat406-Spring07/reading/ebHandout.pdf Empirical Bayes for Beta-Binomial model]
* [http://it.stlawu.edu/~msch/biometrics/papers.htm Using the Beta-binomial distribution to assess performance of a biometric identification device]
* [http://www.emse.fr/g2i/publications/rapports/RR_2005-500-012.pdf Extended Beta-Binomial Model for Demand Forecasting of Multiple Slow-Moving Items with Low Consumption and Short Request History]
* [http://research.microsoft.com/~minka/software/fastfit/ Fastfit] contains Matlab code for fitting Beta-Binomial distributions (in the form of two-dimensional Polya distributions) to data.

References

* Minka, Thomas P. (2003). [http://research.microsoft.com/~minka/papers/dirichlet/ Estimating a Dirichlet distribution]. Microsoft Technical Report.

