Pivotal quantity

In statistics, a pivotal quantity (or pivot) is a function of the observations and the unknown parameters whose probability distribution does not depend on the unknown parameters.

More formally, let X = (X_1, X_2, \ldots, X_n) be an independent and identically distributed sample from a distribution with parameter \theta. A function g(X, \theta) is a pivotal quantity if the distribution of g(X, \theta) does not depend on \theta.

It is relatively easy to construct pivots for location and scale parameters: for the former we form differences, for the latter ratios.
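This recipe can be checked empirically. The following sketch (not from the original article; it assumes numpy and arbitrary choices of sample size, seed, and parameter values) simulates the location pivot \overline{X} - \mu and the scale pivot \overline{X}/\sigma under very different parameter values and confirms that their distributions do not move:

```python
import numpy as np

# Illustrative sketch: for a location parameter mu, the difference
# Xbar - mu is a pivot; for a scale parameter sigma, the ratio
# Xbar / sigma is a pivot.  We check empirically that the distribution
# of each pivot is unchanged when the unknown parameter changes.

rng = np.random.default_rng(0)

def location_pivot_sample(mu, n=50, reps=20000):
    # Each row is one sample of size n from N(mu, 1); return Xbar - mu.
    x = rng.normal(loc=mu, scale=1.0, size=(reps, n))
    return x.mean(axis=1) - mu

def scale_pivot_sample(sigma, n=50, reps=20000):
    # Each row is one sample of size n from N(0, sigma^2); return Xbar / sigma.
    x = rng.normal(loc=0.0, scale=sigma, size=(reps, n))
    return x.mean(axis=1) / sigma

# The empirical distributions coincide across parameter values.
a = location_pivot_sample(mu=0.0)
b = location_pivot_sample(mu=100.0)
c = scale_pivot_sample(sigma=1.0)
d = scale_pivot_sample(sigma=1000.0)
print(a.std(), b.std())  # nearly equal: spread does not depend on mu
print(c.std(), d.std())  # nearly equal: spread does not depend on sigma
```

The `20000` repetitions and `n=50` are arbitrary; any values large enough to tame Monte Carlo noise tell the same story.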

Pivotal quantities provide one method of constructing confidence intervals, and their use improves the performance of the bootstrap.

Example 1

Given n independent, identically distributed (i.i.d.) observations X = (X_1, X_2, \ldots, X_n) from the normal distribution with unknown mean \mu and variance \sigma^2, a pivotal quantity can be obtained from the function

    g(x, X) = \sqrt{n} \frac{x - \overline{X}}{s},

where

    \overline{X} = \frac{1}{n} \sum_{i=1}^n X_i

and

    s^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \overline{X})^2

are unbiased estimates of \mu and \sigma^2, respectively. The function g(x, X) is the Student's t-statistic for a new value x, to be drawn from the same population as the already observed set of values X.

Using x = \mu, the function g(\mu, X) becomes a pivotal quantity that follows the Student's t-distribution with \nu = n - 1 degrees of freedom. As required, even though \mu appears as an argument to the function g, the distribution of g(\mu, X) does not depend on the parameters \mu or \sigma of the normal distribution that governs the observations X_1, \ldots, X_n.
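The pivotality of g(\mu, X) can also be seen by simulation. The sketch below (an illustration with assumed sample size, seed, and parameter settings, not part of the original article) draws normal samples under two very different (\mu, \sigma) pairs and shows that the empirical quantiles of g(\mu, X) are essentially identical in both cases:

```python
import numpy as np

rng = np.random.default_rng(1)

def t_pivot(mu, sigma, n=10, reps=50000):
    # g(mu, X) = sqrt(n) * (mu - Xbar) / s; the sign convention is
    # irrelevant because the t-distribution is symmetric about zero.
    x = rng.normal(mu, sigma, size=(reps, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)  # unbiased (n-1 denominator) sample s.d.
    return np.sqrt(n) * (mu - xbar) / s

# Two very different parameter settings, one Student t_{n-1} distribution.
t1 = t_pivot(mu=0.0, sigma=1.0)
t2 = t_pivot(mu=-7.0, sigma=25.0)
q = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(t1, q))
print(np.quantile(t2, q))  # nearly identical quantiles
```

Both sets of quantiles also match those of the Student's t-distribution with n - 1 = 9 degrees of freedom, which is the point of the example.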

Example 2

In more complicated cases it may be impossible to construct exact pivots. However, even approximate pivots improve convergence to asymptotic normality.

Suppose a sample of size n of vectors (X_i, Y_i)' is taken from a bivariate normal distribution with unknown correlation \rho. An estimator of \rho is the sample (Pearson, moment) correlation

    r = \frac{\frac{1}{n-1} \sum_{i=1}^n (X_i - \overline{X})(Y_i - \overline{Y})}{s_X s_Y},

where s_X and s_Y are the sample standard deviations of X and Y. Being a U-statistic, r has an asymptotically normal distribution:

    \sqrt{n} \frac{r - \rho}{1 - \rho^2} \Rightarrow N(0, 1).

However, a variance-stabilizing transformation

    z = \tanh^{-1} r = \frac{1}{2} \ln \frac{1 + r}{1 - r},

known as Fisher's z transformation of the correlation coefficient, makes the distribution of z asymptotically independent of the unknown parameters:

    \sqrt{n} (z - \zeta) \Rightarrow N(0, 1),

where \zeta = \tanh^{-1} \rho is the corresponding population parameter. For finite sample sizes n, the random variable z has a distribution closer to normal than that of r. An even closer approximation to normality is achieved by using a better approximation to its variance,

    \operatorname{Var}[z] \approx \frac{1}{n - 3}.
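The variance-stabilizing effect can be demonstrated numerically. In the sketch below (an illustration with assumed values of \rho, n, and the seed, not part of the original article), the spread of r shrinks as |\rho| grows, while the spread of z = \tanh^{-1} r stays close to the constant 1/\sqrt{n - 3} regardless of \rho:

```python
import numpy as np

rng = np.random.default_rng(2)

def corr_and_z(rho, n=30, reps=20000):
    # Draw `reps` samples of n bivariate normal pairs with correlation rho,
    # then compute the Pearson correlation r and Fisher's z for each sample.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, n))
    x, y = xy[..., 0], xy[..., 1]
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    r = (xc * yc).sum(axis=1) / np.sqrt(
        (xc ** 2).sum(axis=1) * (yc ** 2).sum(axis=1))
    z = np.arctanh(r)  # Fisher's z transformation
    return r, z

r_lo, z_lo = corr_and_z(rho=0.1)
r_hi, z_hi = corr_and_z(rho=0.9)
print(r_lo.std(), r_hi.std())  # spread of r depends strongly on rho
print(z_lo.std(), z_hi.std())  # spread of z is nearly constant, ~ 1/sqrt(n-3)
```

Because the spread of z barely depends on \rho, \sqrt{n}(z - \zeta) behaves as an approximate pivot even at moderate sample sizes, which is what makes the transformation useful for confidence intervals for \rho.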

References

Shao, J. (2003). Mathematical Statistics. Springer, New York. ISBN 978-0-387-95382-3. (Section 7.1)

