Sample size
The sample size of a statistical sample is the number of observations that constitute it. It is typically denoted "n", a positive integer (natural number). All else being equal, a larger sample size leads to increased precision in estimates of various properties of the population. This can be seen in such statistical results as the law of large numbers and the central limit theorem. Repeated measurements and replication of independent samples are often required in measurement and experiments to reach a desired precision.
A typical example is a statistician who wishes to estimate the arithmetic mean of a continuous random variable (for example, the height of a person). Assuming that they have a random sample with independent observations, and that the variability of the population (as measured by the standard deviation σ) is known, the standard error of the sample mean is given by the formula

: \sigma_{\bar x} = \frac{\sigma}{\sqrt{n}}

It is easy to show that as "n" becomes large, this standard error becomes very small, which leads to more sensitive hypothesis tests with greater statistical power and smaller confidence intervals.
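For illustration, a minimal Python sketch (the population standard deviation σ = 10 is an arbitrary assumption) shows how this standard error shrinks as "n" grows:

 import math

 sigma = 10.0  # assumed population standard deviation

 # Standard error of the sample mean: sigma / sqrt(n)
 for n in (10, 100, 1000, 10000):
     print(f"n = {n:>5}: standard error = {sigma / math.sqrt(n):.3f}")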
Further examples
Central limit theorem
The central limit theorem is a significant result which depends on sample size. It states that as the size of a sample of independent observations from a distribution with finite variance approaches infinity, the sampling distribution of the sample mean approaches a normal distribution.
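As a rough simulation sketch (the exponential source distribution and the sample sizes are arbitrary choices), one can check that means of larger samples cluster more tightly around the population mean, with spread tracking σ/√"n":

 import random
 import statistics

 # Sample means from a skewed (exponential, sd = 1) distribution:
 # their spread shrinks like 1/sqrt(n) and their shape approaches normal.
 random.seed(1)
 for n in (2, 10, 100):
     means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
              for _ in range(10_000)]
     print(f"n = {n:>3}: sd of sample means = {statistics.stdev(means):.3f}")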
Estimating proportions

A typical statistical aim is to demonstrate with 95% certainty that the true value of a parameter is within a distance "B" of the estimate: "B" is an error range that decreases with increasing sample size ("n"). The value of "B" generated is referred to as the 95% confidence interval.

For example, a simple situation is estimating a proportion in a population. To do so, a statistician will estimate the bounds of a 95% confidence interval for an unknown proportion.
The rule of thumb for (a maximum or 'conservative') "B" for a proportion derives from the fact that the estimator of a proportion, \hat p = X/n (where "X" is the number of 'positive' observations), has a (scaled) binomial distribution and is also a form of sample mean (from a Bernoulli distribution on {0, 1}, which has a maximum variance of 0.25 for parameter "p" = 0.5). So the sample mean "X"/"n" has maximum variance 0.25/"n". For sufficiently large "n" (usually this means that we need to have observed at least 10 positive and 10 negative responses), this distribution will be closely approximated by a normal distribution with the same mean and variance.

Using this approximation, it can be shown that ~95% of this distribution's probability lies within 2 standard deviations of the mean. Because of this, an interval of the form

: \left(\hat p - 2\sqrt{\frac{0.25}{n}},\ \hat p + 2\sqrt{\frac{0.25}{n}}\right)
will form a 95% confidence interval for the true proportion.
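For illustration, a minimal Python sketch (the counts X = 520 and n = 1000 are hypothetical survey numbers) computes this conservative interval:

 import math

 x, n = 520, 1000                      # hypothetical: 520 'positive' responses out of 1000
 p_hat = x / n                         # sample proportion X/n
 half_width = 2 * math.sqrt(0.25 / n)  # conservative 95% half-width
 print(f"95% CI: ({p_hat - half_width:.3f}, {p_hat + half_width:.3f})")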
If we require the sampling error ε to be no larger than some bound "B", we can solve the equation

: \varepsilon = 2\sqrt{\frac{0.25}{n}} \le B

to give us

: n \ge \frac{4 \times 0.25}{B^2} = \frac{1}{B^2}
So, "n" = 100 <=> "B" = 10%, "n" = 400 <=> "B" = 5%, "n" = 1000 <=> "B" = ~3%, and "n" = 10000 <=> "B" = 1%. One sees these numbers quoted often in news reports of
Extension to other cases
In general, if a population mean is estimated using the sample mean from "n" observations from a distribution with variance σ², then if "n" is large enough (typically > 30) the central limit theorem can be applied to obtain an approximate 95% confidence interval of the form

: \left(\bar x - \frac{2\sigma}{\sqrt{n}},\ \bar x + \frac{2\sigma}{\sqrt{n}}\right)

If the sampling error ε is required to be no larger than bound "B", as above, then

: n \ge \frac{4\sigma^2}{B^2}
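As a sketch (the values σ = 15 and B = 2 are arbitrary assumptions), the required sample size follows directly from this bound:

 import math

 sigma = 15.0  # assumed population standard deviation
 b = 2.0       # desired 95% error bound on the mean

 n = math.ceil(4 * sigma ** 2 / b ** 2)  # n >= 4*sigma^2 / B^2
 print(f"required n = {n}")              # 225 for these values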
Note that if the mean is to be estimated using "P" parameters that must first be estimated themselves from the same sample, then to preserve sufficient degrees of freedom the sample size should be at least "n" + "P".

Required sample sizes for hypothesis tests
A common problem facing statisticians is calculating the sample size required to yield a certain power for a test, given a predetermined Type I error rate α. A typical example is as follows.

Let "X"_i, "i" = 1, 2, ..., "n" be independent observations taken from a normal distribution with mean μ and variance σ². Let us consider two hypotheses, a null hypothesis

: H_0 : \mu = 0
and an alternative hypothesis

: H_a : \mu = \mu^*
for some 'smallest significant difference' μ* > 0. This is the smallest value for which we care about observing a difference. Now, if we wish to (1) reject "H"_0 with a probability of at least 1-β when "H"_a is true (i.e. a power of 1-β), and (2) reject "H"_0 with probability α when "H"_0 is true, then we need the following:
If "z"α is the upper α percentage point of the standard normal distribution, then
:
and so
: 'Reject "H"0 if our sample average () is more than
is a
decision rule which satisfies (2). (Note, this is a 1-tailed test)Now we wish for this to happen with a probability at least 1-β when "H"a is true. In this case, our sample average will come from a Normal distribution with mean μ*. Therefore we require
:
Through careful manipulation, this can be shown to happen when

: n \ge \left(\frac{z_{\alpha} - \Phi^{-1}(\beta)}{\mu^*/\sigma}\right)^2

where Φ is the normal cumulative distribution function.
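A minimal sketch of this calculation (the values α = 0.05, β = 0.20, σ = 1, and μ* = 0.5 are hypothetical), using the Python standard library's normal distribution:

 from math import ceil
 from statistics import NormalDist

 alpha, beta = 0.05, 0.20   # Type I error rate and (1 - power)
 sigma, mu_star = 1.0, 0.5  # assumed sd and smallest significant difference

 z_alpha = NormalDist().inv_cdf(1 - alpha)  # upper alpha percentage point
 n = ceil(((z_alpha - NormalDist().inv_cdf(beta)) / (mu_star / sigma)) ** 2)
 print(f"required n = {n}")                 # 25 for these values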
Stratified sample size
With more complicated sampling techniques, such as stratified sampling, the sample can often be split up into sub-samples. Typically, if there are "k" such sub-samples (from "k" different strata), then each of them will have a sample size "n"_i, "i" = 1, 2, ..., "k". These "n"_i must conform to the rule that "n"_1 + "n"_2 + ... + "n"_k = "n" (i.e. the total sample size is given by the sum of the sub-sample sizes). Selecting these "n"_i optimally can be done in various ways, using (for example) Neyman's optimal allocation.

According to Leslie Kish (Kish 1965), there are many reasons to take sub-samples from distinct sub-populations, or "strata", of the original population: to decrease variances of sample estimates, to use partly non-random methods, or to study strata individually. A useful, partly non-random method would be to sample individuals where they are easily accessible, but, where they are not, to sample clusters to save travel costs.

In general, for H strata, a weighted sample mean is

: \bar x_w = \sum_{h=1}^{H} W_h \bar x_h

with

: \operatorname{Var}(\bar x_w) = \sum_{h=1}^{H} W_h^2 \operatorname{Var}(\bar x_h)
The weights, W_h, frequently, but not always, represent the proportions of the population elements in the strata, so that W_h = N_h/N. For a fixed sample size, that is n = \sum n_h,

: \operatorname{Var}(\bar x_w) = \sum_{h=1}^{H} W_h^2 \frac{S_h^2}{n_h} \left(1 - \frac{n_h}{N_h}\right)
which can be made a minimum if the sampling rate within each stratum is made proportional to the standard deviation within each stratum: n_h/N_h = k S_h, where k is a constant such that \sum n_h = n.

An "optimum allocation" is reached when the sampling rates within the strata are made directly proportional to the standard deviations within the strata and inversely proportional to the square roots of the costs per element within the strata, C_h:

: \frac{n_h}{N_h} = \frac{K S_h}{\sqrt{C_h}}

or, more generally, when

: n_h = \frac{K' W_h S_h}{\sqrt{C_h}}
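As an illustrative sketch (the three strata, their weights W_h, standard deviations S_h, and per-element costs C_h are all invented), an allocation proportional to W_h S_h/√C_h can be computed as:

 # Hypothetical three-stratum example: weights W_h, sds S_h, per-element costs C_h
 strata = [
     {"W": 0.5, "S": 4.0, "C": 1.0},
     {"W": 0.3, "S": 8.0, "C": 4.0},
     {"W": 0.2, "S": 2.0, "C": 1.0},
 ]
 n = 1000  # total sample size to allocate

 # n_h proportional to W_h * S_h / sqrt(C_h)
 scores = [s["W"] * s["S"] / s["C"] ** 0.5 for s in strata]
 for h, score in enumerate(scores, start=1):
     print(f"stratum {h}: n_h = {round(n * score / sum(scores))}")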
See also
* Design of experiments
* Replication (statistics)
* Sampling (statistics)
* Statistical power
* Stratified sampling
* Engineering response surface example under Stepwise regression

References

* Kish, L. (1965). Survey Sampling. New York: Wiley.
External links
* [http://www.itl.nist.gov/div898/handbook/ppc/section3/ppc333.htm NIST: Selecting Sample Sizes]
* [http://ravenanalytics.com/Articles/Sample_Size_Calculations.htm Raven Analytics: Sample Size Calculations]
*ASTM – [http://www.astm.org/Standards/E122.htm E122-07: Standard Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process] (not free)