Good–Turing frequency estimation
Good–Turing frequency estimation is a statistical technique for predicting the probability of occurrence of objects belonging to an unknown number of species, given past observations of such objects and their species. (In drawing balls from an urn, the 'objects' would be balls and the 'species' would be the distinct colors of the balls, finite but unknown in number. After drawing R_{red} red balls, R_{black} black balls and R_{green} green balls, we would ask: what is the probability of drawing a red ball, a black ball, a green ball, or a ball of a previously unseen color?)
Historical background
Good–Turing frequency estimation was developed by Alan Turing and his assistant I. J. Good during World War II. (It was part of their efforts at Bletchley Park to crack German ciphers for the Enigma machine during the war.) Turing at first modeled the frequencies as a binomial distribution, but found it inaccurate. Good developed smoothing algorithms to improve the estimator's accuracy. The discovery was recognized as significant when published by Good in 1953, but the calculations were difficult, so the method was not used as widely as it might have been. [ [http://www.newswise.com/articles/view/501440/ Newswise: Scientists Explain and Improve Upon 'Enigmatic' Probability Formula] ]
The method even gained some literary fame due to the Robert Harris novel "Enigma".
In the 1990s, Geoffrey Sampson worked with William A. Gale of AT&T to create and implement a simplified and easier-to-use variant of the Good–Turing method [ [http://www.grsampson.net/RGoodTur.html Geoffrey Sampson: Good–Turing Frequency Estimation] ], described below.
The method
Let us define some data structures and notation.
Assume we have observed "X" distinct species, numbered "x" = 1, ... , "X".
The frequency vector R_x gives the number of objects we have observed for species "x".
The frequency of frequencies vector N_r shows how many times the frequency "r" occurs in the vector R_x. For example, N_1 is the number of species for which only 1 object was observed.
Note that the total number of objects observed, "N", can be found from N = sum_r r N_r.
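These data structures can be sketched directly in Python; the observations below are made up for illustration, and the names R, N_r and N mirror the notation above:

```python
from collections import Counter

# Hypothetical observations: each entry is the species of one observed object.
observations = ["red", "red", "red", "black", "black", "green"]

# R[x]: number of objects observed for species x (the frequency vector).
R = Counter(observations)

# N_r[r]: number of species observed exactly r times (frequency of frequencies).
N_r = Counter(R.values())

# Total number of objects observed: N = sum_r r * N_r.
N = sum(r * n for r, n in N_r.items())

print(R["red"])  # 3 red balls were drawn
print(N_r[1])    # 1 species ("green") was seen exactly once
print(N)         # 6 objects in total
```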
The first step in the calculation is to find an estimate of the total probability of unseen objects. This estimate is p_0 = N_1 / N.
The next step is to find an estimate of the probability for objects which were seen "r" times. This estimate is p_r = frac{(r+1) S(N_{r+1})}{N S(N_r)} (see also
empirical Bayes method). The notation S( ) means the smoothed or adjusted value of the frequency shown in parentheses. An overview of how to perform this smoothing follows.
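The two estimates above can be illustrated in Python with the trivial choice S(N_r) = N_r, i.e. no smoothing; the frequency-of-frequencies table is hypothetical:

```python
from collections import Counter

# Hypothetical frequency of frequencies: one species seen once,
# one seen twice, one seen three times.
N_r = Counter({1: 1, 2: 1, 3: 1})
N = sum(r * n for r, n in N_r.items())  # total objects observed: 6

# Total probability mass assigned to all unseen species: p_0 = N_1 / N.
p0 = N_r[1] / N

# Unsmoothed Good-Turing estimate for a species seen r times,
# taking S(N_r) = N_r; only valid where N_r > 0.
def p(r):
    return (r + 1) * N_r[r + 1] / (N * N_r[r])

print(p0)    # 1/6
print(p(1))  # 2 * N_2 / (6 * N_1) = 1/3
```

Note that p(3) here would be zero because N_4 = 0; this is exactly the problem the smoothing step below addresses.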
We would like to make a plot of log N_r versus log r, but this is problematic because for large "r" many of the N_r will be zero. Instead we plot log Z_r versus log r, where Z_r is defined as
:Z_r = frac{N_r}{0.5(t-q)}
where "q", "r", and "t" are consecutive subscripts for which N_q, N_r, and N_t are non-zero.
A linear regression is then fitted to the log-log plot. For small values of "r" we take S(N_r) = N_r (that is, no smoothing is performed), while for large values of "r", S(N_r) is read off the regression line. An automatic procedure (not described here) specifies at what point the switch from no smoothing to linear smoothing should take place. Code for the method is available in the public domain.
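The Z_r transformation and the log-log regression can be sketched as follows. The frequency-of-frequencies table is invented for illustration, and the end conditions (q = 0 for the first subscript, t = 2r - q for the last) follow the Gale–Sampson convention:

```python
import math

# Hypothetical frequency-of-frequencies table {r: N_r}, with gaps at large r.
N_r = {1: 120, 2: 40, 3: 24, 4: 13, 5: 15, 7: 5, 12: 2}
rs = sorted(N_r)

# Z_r = N_r / (0.5 * (t - q)), where q and t are the neighbouring non-zero
# subscripts; at the ends use q = 0 and t = 2r - q.
Z = {}
for i, r in enumerate(rs):
    q = rs[i - 1] if i > 0 else 0
    t = rs[i + 1] if i + 1 < len(rs) else 2 * r - q
    Z[r] = N_r[r] / (0.5 * (t - q))

# Fit log Z_r = a + b * log r by ordinary least squares.
xs = [math.log(r) for r in rs]
ys = [math.log(Z[r]) for r in rs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Smoothed value S(N_r) read off the regression line, usable even where N_r = 0.
def S(r):
    return math.exp(a + b * math.log(r))

print(Z[1])  # 120 / (0.5 * (2 - 0)) = 120.0
```

The fitted slope b is negative for any realistically decaying table, so S(r) decreases with r, which is what allows p_r to stay positive even at frequencies where the raw N_{r+1} is zero.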
References
See also
* Ewens sampling formula
* Pseudocount