Multinomial distribution

Multinomial
parameters: n > 0 number of trials (integer)
p_1, \ldots, p_k event probabilities (\sum_i p_i = 1)
support: X_i \in \{0,\dots,n\},\quad \sum_i X_i = n
pmf: \frac{n!}{x_1!\cdots x_k!} p_1^{x_1} \cdots p_k^{x_k}
mean: \operatorname{E}(X_i) = n p_i
variance: \operatorname{Var}(X_i) = n p_i (1-p_i)
\operatorname{Cov}(X_i,X_j) = -n p_i p_j \quad (i \neq j)
mgf: \biggl( \sum_{i=1}^k p_i e^{t_i} \biggr)^n
pgf: \biggl( \sum_{i=1}^k p_i z_i \biggr)^n \text{ for } (z_1,\ldots,z_k) \in \mathbb{C}^k

In probability theory, the multinomial distribution is a generalization of the binomial distribution.

The binomial distribution is the probability distribution of the number of "successes" in n independent Bernoulli trials, with the same probability of "success" on each trial. In a multinomial distribution, the analog of the Bernoulli distribution is the categorical distribution, where each trial results in exactly one of some fixed finite number k of possible outcomes, with probabilities p1, ..., pk (so that pi ≥ 0 for i = 1, ..., k and \sum_{i=1}^k p_i = 1), and there are n independent trials. Then let the random variables Xi indicate the number of times outcome number i was observed over the n trials. The vector X = (X1, ..., Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk).

Note that, in some fields, such as natural language processing, the categorical and multinomial distributions are conflated, and it is common to speak of a "multinomial distribution" when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range 1 \dots K; in this form, a categorical distribution is equivalent to a multinomial distribution over a single observation.

Specification

Probability mass function

The probability mass function of the multinomial distribution is:

\begin{align}
f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) &= \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k) \\
&= \begin{cases}
\displaystyle \frac{n!}{x_1!\cdots x_k!} \, p_1^{x_1}\cdots p_k^{x_k}, & \text{when } \sum_{i=1}^k x_i = n, \\
0 & \text{otherwise,}
\end{cases}
\end{align}

for non-negative integers x1, ..., xk.
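As a check on the formula, here is a minimal sketch of the pmf using only the Python standard library (the function name `multinomial_pmf` is ours, not from any particular package):

```python
from math import factorial

def multinomial_pmf(xs, ps):
    """Multinomial pmf at counts xs (summing to n) with probabilities ps."""
    n = sum(xs)
    coeff = factorial(n)
    for x in xs:
        coeff //= factorial(x)      # n! / (x_1! ... x_k!), always an integer
    prob = float(coeff)
    for x, p in zip(xs, ps):
        prob *= p ** x
    return prob

# k = 2 reduces to the binomial case: C(3, 2) * 0.5^2 * 0.5^1
print(multinomial_pmf([2, 1], [0.5, 0.5]))  # 0.375
```

With k = 2 this reproduces the binomial pmf exactly, as the text's description of the generalization suggests it should.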

Visualization

As slices of generalized Pascal's triangle

Just as one can interpret the binomial distribution as (normalized) 1D slices of Pascal's triangle, one can interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or as 3D/4D/higher (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension, that is, a simplex with a grid.

As polynomial coefficients

Similarly, just as one can interpret the binomial distribution as the coefficients of (p x_1 + (1 − p) x_2)^n when expanded, one can interpret the multinomial distribution as the coefficients of (p_1 x_1 + p_2 x_2 + \cdots + p_k x_k)^n when expanded. (Note that, just as for the binomial distribution, the coefficients must sum to 1.) This is the origin of the name "multinomial distribution".
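Summing the pmf over every vector of counts that totals n recovers (p_1 + \cdots + p_k)^n = 1, which is the "coefficients sum to 1" observation above. A brute-force check under these assumptions (the helper `pmf` is ours):

```python
from math import factorial
from itertools import product

def pmf(xs, ps):
    coeff = factorial(sum(xs))
    for x in xs:
        coeff //= factorial(x)
    prob = float(coeff)
    for x, p in zip(xs, ps):
        prob *= p ** x
    return prob

# Sum the pmf over every (x_1, x_2, x_3) with x_1 + x_2 + x_3 = n.
n, ps = 4, [0.2, 0.3, 0.5]
total = sum(pmf(xs, ps)
            for xs in product(range(n + 1), repeat=len(ps))
            if sum(xs) == n)
print(round(total, 10))  # 1.0
```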

Properties

The expected number of times the outcome i was observed over n trials is

\operatorname{E}(X_i) = n p_i.

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

\operatorname{Var}(X_i) = n p_i (1-p_i).

The off-diagonal entries are the covariances:

\operatorname{Cov}(X_i,X_j) = -n p_i p_j

for i, j distinct.

All covariances are negative because for fixed n, an increase in one component of a multinomial vector requires a decrease in another component.

This is a k × k positive-semidefinite matrix of rank k − 1.

The off-diagonal entries of the corresponding correlation matrix are

\rho(X_i,X_j) = -\sqrt{\frac{p_i p_j}{ (1-p_i)(1-p_j)}}.

Note that the sample size drops out of this expression.
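A quick numerical check that the off-diagonal correlation matches this closed form, and that n indeed drops out (values here are illustrative):

```python
from math import sqrt

p = [0.2, 0.3, 0.5]
i, j = 0, 1

for n in (6, 600):
    cov = -n * p[i] * p[j]                  # Cov(X_i, X_j)
    var_i = n * p[i] * (1 - p[i])           # Var(X_i)
    var_j = n * p[j] * (1 - p[j])           # Var(X_j)
    rho = cov / sqrt(var_i * var_j)
    closed = -sqrt(p[i] * p[j] / ((1 - p[i]) * (1 - p[j])))
    print(abs(rho - closed) < 1e-12)  # True for every n
```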

Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.

The support of the multinomial distribution is the set

\{(n_1,\dots,n_k) \in \mathbb{N}^{k} \mid n_1+\cdots+n_k=n\}.

Its number of elements is

{n+k-1 \choose k-1} = \left\langle \begin{matrix}n \\ k \end{matrix}\right\rangle,

the number of n-combinations of a multiset with k types, or multiset coefficient.
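The count can be confirmed by brute force against the multiset-coefficient formula; a small sketch with illustrative values of n and k:

```python
from math import comb
from itertools import product

# Brute-force size of the support {(x_1,...,x_k) : x_1+...+x_k = n}
# versus the multiset coefficient C(n + k - 1, k - 1).
n, k = 6, 3
count = sum(1 for xs in product(range(n + 1), repeat=k) if sum(xs) == n)
print(count, comb(n + k - 1, k - 1))  # 28 28
```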

Example

In a recent three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?

Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large.

 \Pr(A=1,B=2,C=3) = \frac{6!}{1! 2! 3!}(0.2^1) (0.3^2) (0.5^3) = 0.135
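The arithmetic above can be reproduced directly:

```python
from math import factorial

# Multinomial coefficient 6! / (1! 2! 3!) = 60, then multiply in the
# outcome probabilities for candidates A, B, C.
coeff = factorial(6) // (factorial(1) * factorial(2) * factorial(3))
prob = coeff * 0.2**1 * 0.3**2 * 0.5**3
print(coeff, round(prob, 6))  # 60 0.135
```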

Sampling from a multinomial distribution

First, reorder the parameters p_1, \ldots, p_k so that they are sorted in descending order (this only speeds up computation and is not strictly necessary). Now, for each trial, draw an auxiliary variable X from a uniform (0, 1) distribution. The resulting outcome is the component

j = \min \left\{ j' \in \{1,\ldots,k\} : \sum_{i=1}^{j'} p_i \ge X \right\}.

This is a sample for the multinomial distribution with n = 1. A sum of independent repetitions of this experiment is a sample from a multinomial distribution with n equal to the number of such repetitions.
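A sketch of this inverse-CDF scheme, skipping the optional descending sort (the function name is ours):

```python
import random
from bisect import bisect_left
from itertools import accumulate

def sample_multinomial(n, ps, rng=random):
    """Draw one multinomial sample via n inverse-CDF categorical draws."""
    cdf = list(accumulate(ps))                 # p_1, p_1+p_2, ..., 1
    counts = [0] * len(ps)
    for _ in range(n):
        u = rng.random()                       # uniform (0, 1) auxiliary draw
        # smallest index j with cdf[j] >= u; the min() guards against
        # floating-point sums that land slightly below 1
        j = min(bisect_left(cdf, u), len(ps) - 1)
        counts[j] += 1
    return counts

random.seed(0)
print(sample_multinomial(6, [0.2, 0.3, 0.5]))
```

Each pass through the loop is one categorical (n = 1) trial; accumulating the counts over n trials gives the multinomial sample, as described above.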

