Holm-Bonferroni method

In statistics, the Holm-Bonferroni method [Holm, S. (1979): "A simple sequentially rejective multiple test procedure", Scandinavian Journal of Statistics, 6: 65-70] is a procedure for performing more than one hypothesis test simultaneously while controlling the overall type 1 error rate. It is named after Sture Holm and Carlo Emilio Bonferroni.

Suppose there are k hypotheses to be tested and the overall type 1 error rate is α. Start by ordering the p-values and comparing the smallest p-value to α/k. If that p-value is less than α/k, reject that hypothesis and repeat the procedure with the same α on the remaining k - 1 hypotheses, i.e. order the k - 1 remaining p-values and compare the smallest one to α/(k - 1). Continue in this way until the hypothesis with the smallest remaining p-value cannot be rejected. At that point, stop and accept all hypotheses that have not been rejected at previous steps.
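
Expressed as code, the step-down procedure reads as follows. This is a minimal sketch in Python under the description above; the function name holm_rejections and its interface are illustrative, not part of any standard library.

    def holm_rejections(p_values, alpha=0.05):
        """Indices of hypotheses rejected by the Holm-Bonferroni step-down procedure."""
        k = len(p_values)
        # Visit hypotheses in order of increasing p-value, remembering original positions.
        order = sorted(range(k), key=lambda i: p_values[i])
        rejected = []
        for step, i in enumerate(order):
            # With (k - step) hypotheses still in play, the threshold is alpha / (k - step).
            if p_values[i] < alpha / (k - step):
                rejected.append(i)
            else:
                break  # the smallest remaining p-value fails, so stop and accept the rest
        return rejected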

Here is an example. Four hypotheses are tested with α = 0.05. The four unadjusted p-values are 0.01, 0.03, 0.04, and 0.005. The smallest of these is 0.005. Since this is less than 0.05/4 = 0.0125, hypothesis four is rejected. The next smallest p-value is 0.01, which is smaller than 0.05/3 ≈ 0.0167, so hypothesis one is also rejected. The next smallest p-value is 0.03, which is not smaller than 0.05/2 = 0.025, so the procedure stops. Therefore, hypotheses one and four are rejected while hypotheses two and three are not rejected.
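
The same numbers can be checked with a short standalone loop; the hypothesis labels H1-H4 are only for readability.

    alpha = 0.05
    p = {"H1": 0.01, "H2": 0.03, "H3": 0.04, "H4": 0.005}
    remaining = len(p)                       # hypotheses not yet rejected
    for name, value in sorted(p.items(), key=lambda kv: kv[1]):
        threshold = alpha / remaining
        if value < threshold:
            print(f"{name}: p = {value} < {threshold:.4f}, reject")
            remaining -= 1
        else:
            print(f"{name}: p = {value} >= {threshold:.4f}, stop and accept the rest")
            break

This prints rejections for H4 and H1 and stops at H2, matching the conclusion above.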

The Holm-Bonferroni method is an example of a closed test procedure [Marcus, R., Peritz, E., Gabriel, K. R. (1976): "On closed testing procedures with special reference to ordered analysis of variance", Biometrika, 63: 655-660]. As such, it controls the familywise error rate for all k hypotheses at level α in the strong sense. Each intersection hypothesis is tested using the simple Bonferroni test.
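
To see the closed testing connection concretely, one can run the closed testing procedure by brute force on the example above: apply a Bonferroni test to every non-empty intersection of the four hypotheses and reject an elementary hypothesis only if every intersection containing it is rejected. The sketch below is illustrative and follows that principle directly rather than any code from the cited paper.

    from itertools import combinations

    alpha = 0.05
    p = [0.01, 0.03, 0.04, 0.005]        # hypotheses one to four from the example
    indices = range(len(p))

    def bonferroni_rejects(subset):
        # Bonferroni test of the intersection hypothesis over `subset`.
        return min(p[i] for i in subset) < alpha / len(subset)

    rejected = []
    for i in indices:
        supersets = (s for r in range(1, len(p) + 1)
                       for s in combinations(indices, r) if i in s)
        if all(bonferroni_rejects(s) for s in supersets):
            rejected.append(i + 1)

    print(rejected)   # [1, 4]: the same hypotheses the Holm-Bonferroni method rejects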

It is also possible to define a weighted version. Let p_1, ..., p_k be the unadjusted p-values and let w_1, ..., w_k be a set of corresponding positive weights that add to 1. Without loss of generality, assume the p-values and the weights are all ordered such that p_1/w_1 ≤ p_2/w_2 ≤ ... ≤ p_k/w_k. The adjusted p-value for the first hypothesis is q_1 = min{1, p_1/w_1}. Inductively, define the adjusted p-value for hypothesis i by q_i = min{1, max{q_{i-1}, (w_i + ... + w_k)×p_i/w_i}}. A hypothesis is rejected at level α if and only if its adjusted p-value is less than α. In the earlier example using equal weights, the adjusted p-values are 0.03, 0.06, 0.06, and 0.02. This is another way to see that, using α = 0.05, only hypotheses one and four are rejected by this procedure.
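
As a check on the weighted definition, the following sketch computes the adjusted p-values for the example above with equal weights; the function name weighted_holm_adjusted is illustrative, and the inputs are assumed to be unadjusted p-values together with positive weights summing to 1.

    def weighted_holm_adjusted(p_values, weights):
        k = len(p_values)
        # Order hypotheses by p_i / w_i, keeping their original positions.
        order = sorted(range(k), key=lambda i: p_values[i] / weights[i])
        adjusted = [0.0] * k
        previous = 0.0                    # adjusted p-value of the previous hypothesis
        remaining_weight = 1.0            # w_i + ... + w_k for the current step
        for i in order:
            q = min(1.0, max(previous, remaining_weight * p_values[i] / weights[i]))
            adjusted[i] = q
            previous = q
            remaining_weight -= weights[i]
        return adjusted

    print(weighted_holm_adjusted([0.01, 0.03, 0.04, 0.005], [0.25] * 4))
    # [0.03, 0.06, 0.06, 0.02], as stated above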

References

* Holm, S. (1979). "A simple sequentially rejective multiple test procedure". Scandinavian Journal of Statistics, 6: 65-70.
* Marcus, R., Peritz, E., Gabriel, K. R. (1976). "On closed testing procedures with special reference to ordered analysis of variance". Biometrika, 63: 655-660.

See also

* Multiple comparisons

