Tukey's test

Tukey's test, named after John Tukey, is a statistical test generally used in conjunction with an ANOVA to find which means are significantly different from one another. It compares all possible pairs of means, and is based on a "studentized range" distribution "q" (this distribution is similar to the distribution of "t" from the t-test) (Linton and Harder 2007).

The test compares the mean of every treatment to the mean of every other treatment, and identifies any pair of means whose difference is greater than the standard error would be expected to allow.
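In practice, these all-pairs comparisons are usually run with a statistics package rather than by hand. The following is a minimal sketch in Python, assuming the pairwise_tukeyhsd function from statsmodels (statsmodels.stats.multicomp) and using made-up measurements and treatment labels:

    import numpy as np
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical data: one measurement column and a matching treatment label column
    values = np.array([24.5, 23.1, 26.0, 29.8, 31.2, 30.5, 22.0, 21.4, 23.3])
    groups = np.array(["A", "A", "A", "B", "B", "B", "C", "C", "C"])

    # Tukey's HSD compares every pair of group means at the chosen alpha
    result = pairwise_tukeyhsd(values, groups, alpha=0.05)
    print(result.summary())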

Assumptions of Tukey's test

1. The observations being tested are independent.
2. The means are from normally distributed populations.

The test statistic

Tukey's test is based on a formula very similar to that of the t-test. In fact, Tukey's test is essentially a t-test, except that it corrects for the experiment-wise error rate: when multiple comparisons are made, the probability of making a type I error increases, and Tukey's test corrects for this, making it more suitable for multiple comparisons than a series of t-tests would be.

The formula for Tukey's test is:

: q_s = \frac{Y_A - Y_B}{SE},

where "Y"A is the larger of the two means being compared, "Y"B is the smaller of the two means being compared, and SE is the standard error of the data in question.

This q_s value can then be compared to a critical value q from the "studentized range" distribution. If the q_s value is larger than the q_critical value obtained from the distribution, the two means are said to be significantly different.
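As a rough numerical sketch of this comparison, assuming SciPy 1.7 or later (which provides the studentized range distribution as scipy.stats.studentized_range) and using made-up means, standard error, group count and degrees of freedom:

    from scipy.stats import studentized_range

    # Hypothetical values for illustration
    y_a, y_b = 31.2, 22.2      # larger and smaller group means
    se = 1.8                   # standard error of the data in question
    k, df = 3, 12              # number of groups, error degrees of freedom
    alpha = 0.05

    q_s = (y_a - y_b) / se                               # test statistic
    q_crit = studentized_range.ppf(1 - alpha, k, df)     # critical value
    print(q_s, q_crit, q_s > q_crit)                     # significant if q_s > q_crit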

Since the null hypothesis for Tukey's test states that all means being compared are from the same population (i.e. μ_1 = μ_2 = μ_3 = ... = μ_n), the means should be normally distributed (according to the central limit theorem). This gives rise to the normality assumption of Tukey's test.

The "q"-distribution

Tukey's test is based on the comparison of two samples from the same population. From the first sample, the range (calculated by subtracting the smallest observation from the largest, or \text{range} = \max_i(Y_i) - \min_i(Y_i), where Y_i represents all of the observations) is calculated, and from the second sample, the standard deviation is calculated. The "studentized range" ratio is then calculated:

: q = \frac{\text{range}}{s},

where "q" = "studentized range", and "s" = standard deviation of the second sample.

This value of "q" is the basis of the critical value of "q", based on three factors:
1. α (the Type I error rate, or the probability of rejecting a true null hypothesis)
2. "n" (the number of degrees of freedom in the first sample, the one from which the range was calculated)
3. "v" (the number of degrees of freedom in the second sample, the one from which "s" was calculated)

Order of comparisons

If there is a set of means ("A", "B", "C", "D") which can be ranked in the order "A" > "B" > "C" > "D", not all possible comparisons need be tested using Tukey's test. To avoid redundancy, one starts by comparing the largest mean ("A") with the smallest mean ("D"). If the q_s value for the comparison of means "A" and "D" is less than the critical q value from the distribution, the null hypothesis is accepted, and the means are said to have no statistically significant difference between them. Since there is no difference between the two means that have the largest difference, comparing any two means that have a smaller difference is futile, and no other comparisons need to be made.

Overall, it is important when doing Tukey's test to always start by comparing the largest mean with the smallest mean, and then the largest mean with the next smallest, and so on, until the largest mean has been compared with all other means (or until no difference is found). After this, compare the second largest mean with the smallest mean, and then the next smallest, and so on. Once again, if two means are found to have no statistically significant difference, do "not" compare any of the means that lie between them.
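The ordering and skipping rules above can be written out as a short sketch; the is_significant callable below is hypothetical and stands in for computing q_s for a pair and comparing it with q_critical, so only the traversal logic is spelled out here:

    def tukey_comparison_order(means, is_significant):
        """Run pairwise comparisons in Tukey's recommended order.

        `means` is assumed to be sorted from largest to smallest, and
        `is_significant(larger, smaller)` is a hypothetical callable that
        computes q_s for that pair and compares it against q_critical.
        Pairs lying inside a pair already judged non-significant are skipped.
        """
        nonsig_spans = []     # index pairs (i, j) found non-significant
        decisions = {}
        n = len(means)
        for i in range(n - 1):                     # largest mean, then second largest, ...
            for j in range(n - 1, i, -1):          # compare against the smallest remaining first
                if any(a <= i and j <= b for a, b in nonsig_spans):
                    continue                       # bracketed by a non-significant pair: skip
                if is_significant(means[i], means[j]):
                    decisions[(i, j)] = True
                else:
                    decisions[(i, j)] = False
                    nonsig_spans.append((i, j))
                    break                          # smaller differences in this row need no testing
        return decisions

    # Illustrative use: means sorted largest to smallest, with a toy significance rule
    means = [31.2, 27.5, 24.0, 22.2]
    print(tukey_comparison_order(means, lambda a, b: (a - b) > 5.0))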

References

Linton, L.R. and Harder, L.D. (2007). Biology 315 - Quantitative Biology Lecture Notes. University of Calgary, Calgary, AB.
