Uniform distribution (discrete)

[Figure: probability mass function and cumulative distribution function of the discrete uniform distribution for n = 5, where n = b − a + 1.]

parameters: a \in \{\dots,-2,-1,0,1,2,\dots\}
            b \in \{\dots,-2,-1,0,1,2,\dots\},\ b \ge a
            n = b - a + 1
support: k \in \{a,a+1,\dots,b-1,b\}
pmf: \begin{cases} \frac{1}{n} & \text{for } a \le k \le b \\ 0 & \text{otherwise} \end{cases}
cdf: \begin{cases} 0 & \text{for } k < a \\ \frac{\lfloor k \rfloor - a + 1}{n} & \text{for } a \le k \le b \\ 1 & \text{for } k > b \end{cases}
mean: \frac{a+b}{2}
median: \frac{a+b}{2}
mode: N/A
variance: \frac{(b-a+1)^2-1}{12}=\frac{n^2-1}{12}
skewness: 0
ex.kurtosis: -\frac{6(n^2+1)}{5(n^2-1)}
entropy: \ln(n)
mgf: \frac{e^{at}-e^{(b+1)t}}{n(1-e^t)}
cf: \frac{e^{iat}-e^{i(b+1)t}}{n(1-e^{it})}

In probability theory and statistics, the discrete uniform distribution is a probability distribution in which a finite number of equally spaced values are equally likely to be observed; each of the n values has probability 1/n. In other words, it describes a known, finite number of equally spaced outcomes, each equally likely to occur.

If a random variable has any of n possible values k_1,k_2,\dots,k_n that are equally spaced and equally probable, then it has a discrete uniform distribution. The probability of any outcome k_i is 1/n. A simple example of the discrete uniform distribution is throwing a fair die. The possible values of k are 1, 2, 3, 4, 5, 6; each time the die is thrown, the probability of a given score is 1/6. If two dice are thrown and their values added, the resulting distribution is no longer uniform, since the sums from 2 to 12 are not equally probable.
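The die example can be checked directly; a minimal Python sketch, using the closed-form mean and variance formulas listed above:

```python
from fractions import Fraction

# A fair die is discrete uniform on {1, ..., 6}, so a = 1, b = 6.
a, b = 1, 6
n = b - a + 1

# Each outcome k in {a, ..., b} has probability 1/n.
pmf = {k: Fraction(1, n) for k in range(a, b + 1)}
assert sum(pmf.values()) == 1

# Closed-form moments: mean (a+b)/2 and variance (n^2 - 1)/12.
mean = Fraction(a + b, 2)            # 7/2
variance = Fraction(n ** 2 - 1, 12)  # 35/12

print(mean, variance)
```

Exact fractions are used here so the results match the closed forms without floating-point noise.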

The cumulative distribution function (CDF) can be expressed in terms of a degenerate distribution as

F(k;a,b,n)={1\over n}\sum_{i=1}^n H(k-k_i)

where the Heaviside step function H(x − x_0) is the CDF of the degenerate distribution centered at x_0, using the convention that H(0) = 1.
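This sum-of-steps form of the CDF translates directly into code; `heaviside` and `cdf` below are illustrative helper names, not from the source:

```python
def heaviside(x):
    """Heaviside step with the convention H(0) = 1,
    i.e. the CDF of a degenerate distribution at 0."""
    return 1.0 if x >= 0 else 0.0

def cdf(x, points):
    """CDF of a discrete uniform distribution on the given points,
    as the average of n Heaviside steps H(x - k_i)."""
    return sum(heaviside(x - k) for k in points) / len(points)

# Fair die: support points 1..6.
die = [1, 2, 3, 4, 5, 6]
print(cdf(3, die))    # F(3) = 3/6 = 0.5
print(cdf(0, die))    # below the support: 0.0
print(cdf(6.5, die))  # above the support: 1.0
```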


Estimation of maximum

Suppose a sample of k observations is obtained from a uniform distribution on the integers 1,2,\dots,N, and the problem is to estimate the unknown maximum N. This problem is commonly known as the German tank problem, after the application of maximum estimation to estimates of German tank production during World War II.

The UMVU estimator for the maximum is given by

\hat{N}=\frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1

where m is the sample maximum and k is the sample size, sampling without replacement.[1][2] This can be seen as a very simple case of maximum spacing estimation.

The formula may be understood intuitively as:

"The sample maximum plus the average gap between observations in the sample",

the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.[notes 1]
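The estimator is a one-liner; the sample values below are made up purely for illustration:

```python
def umvu_max_estimate(sample):
    """UMVU estimate of the population maximum N from a sample drawn
    without replacement from {1, ..., N}: (k+1)/k * m - 1, i.e. the
    sample maximum plus the average gap between observations."""
    k = len(sample)
    m = max(sample)
    return m + m / k - 1

# Hypothetical observed serial numbers: k = 4 observations, maximum m = 60.
print(umvu_max_estimate([19, 40, 42, 60]))  # 60 + 60/4 - 1 = 74.0
```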

This has a variance of[1]

\frac{1}{k}\frac{(N-k)(N+1)}{(k+2)} \approx \frac{N^2}{k^2} \text{ for small samples } k \ll N

so a standard deviation of approximately N / k, the (population) average size of a gap between samples; compare \frac{m}{k} above.

The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
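The bias of the sample maximum, and its correction by the UMVU estimator, can be seen in a quick Monte Carlo sketch (the values of N, k, the trial count, and the seed are arbitrary choices for illustration):

```python
import random

# Draw k integers without replacement from {1, ..., N}, and average the
# sample maximum and the UMVU estimate m + m/k - 1 over many trials.
random.seed(0)
N, k, trials = 100, 5, 20_000
max_total = umvu_total = 0.0
for _ in range(trials):
    sample = random.sample(range(1, N + 1), k)  # sampling without replacement
    m = max(sample)
    max_total += m
    umvu_total += m + m / k - 1

print(max_total / trials)   # well below N: the sample maximum is biased low
print(umvu_total / trials)  # close to N: the UMVU estimator is unbiased
```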

If samples are not numbered but are recognizable or markable, one can instead estimate population size via the capture-recapture method.

Random permutation

See rencontres numbers for an account of the probability distribution of the number of fixed points of a uniformly distributed random permutation.

Notes


  1. ^ The sample maximum is never more than the population maximum, but it can be less; hence it is a biased estimator that tends to underestimate the population maximum.


References


  1. ^ a b Johnson, Roger (1994), "Estimating the Size of a Population", Teaching Statistics 16 (2): 50, doi:10.1111/j.1467-9639.1994.tb00688.x
  2. ^ Johnson, Roger (2006), "Estimating the Size of a Population", Getting the Best from Teaching Statistics, http://www.rsscse.org.uk/ts/gtb/johnson.pdf
