**Statistical significance**
In statistics, a result is called **statistically significant** if it is unlikely to have occurred by chance. "A statistically significant difference" simply means there is statistical evidence that there is a difference; it does not mean the difference is necessarily large, important, or significant in the common meaning of the word.

The **significance level** of a test is a traditional frequentist statistical hypothesis testing concept. In simple cases, it is defined as the probability of deciding to reject the null hypothesis when the null hypothesis is actually true (a decision known as a Type I error, or "false positive determination"). The decision is often made using the p-value: if the p-value is less than the significance level, then the null hypothesis is rejected. The smaller the p-value, the more significant the result is said to be. In more complicated, but practically important, cases the significance level of a test is a probability such that the probability of deciding to reject the null hypothesis when the null hypothesis is actually true is no more than the stated probability. This allows for applications in which the probability of deciding to reject may be much smaller than the significance level for some sets of assumptions encompassed within the null hypothesis.

**Use in practice**

The significance level is usually denoted by the Greek symbol α (alpha). Popular levels of significance are 5%, 1% and 0.1%. If a "test of significance" gives a p-value lower than the α-level, the null hypothesis is rejected. Such results are informally referred to as "statistically significant". For example, if someone argues that "there's only one chance in a thousand this could have happened by coincidence," a 0.1% level of statistical significance is being implied. The lower the significance level, the stronger the evidence.
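As a minimal sketch of this decision rule, the following Python snippet compares the p-value from a two-sample t-test against α. The sample values, group names and the choice of test are invented purely for illustration.

```python
# Minimal sketch of the decision rule: reject the null hypothesis
# when the p-value falls below the chosen significance level alpha.
# The two samples below are made-up illustrative data.
from scipy import stats

alpha = 0.05  # conventional 5% significance level

group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0]
group_b = [5.6, 5.4, 5.8, 5.5, 5.7, 5.3, 5.6, 5.5]

# Two-sample t-test under the null hypothesis of equal means
t_stat, p_value = stats.ttest_ind(group_a, group_b)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```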

In some situations it is convenient to express the statistical significance as 1 − α. In general, when interpreting a stated significance, one must be careful to note what, precisely, is being tested statistically.

Different α-levels have different advantages and disadvantages. Smaller α-levels give greater confidence in the determination of significance, but run greater risks of failing to reject a false null hypothesis (a Type II error, or "false negative determination"), and so have less statistical power. The selection of an α-level inevitably involves a compromise between significance and power, and consequently between the Type I error and the Type II error.

In some fields, for example nuclear and particle physics, it is common to express statistical significance in units of "σ" (sigma), the standard deviation of a Gaussian distribution. A statistical significance of $n\sigma$ can be converted into a value of α via the error function:

$\alpha = 1 - \operatorname{erf}(n/\sqrt{2})$

The use of σ is motivated by the ubiquitous emergence of the Gaussian distribution in measurement uncertainties. For example, if a theory predicts a parameter to have a value of, say, 100, and one measures the parameter to be 109 ± 3, then one might report the measurement as a "3σ deviation" from the theoretical prediction. In terms of α, this statement is equivalent to saying that "assuming the theory is true, the likelihood of obtaining the experimental result by coincidence is 0.27%" (since 1 − erf(3/√2) = 0.0027).
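The conversion above is straightforward to compute. The following sketch uses only the Python standard library; the function name and the chosen values of n are illustrative.

```python
# Converting a significance in units of sigma into alpha via the
# error function: alpha = 1 - erf(n / sqrt(2)).
import math

def sigma_to_alpha(n_sigma: float) -> float:
    """Tail probability of an n-sigma Gaussian deviation."""
    return 1.0 - math.erf(n_sigma / math.sqrt(2.0))

for n in (1, 2, 3, 5):
    print(f"{n} sigma -> alpha = {sigma_to_alpha(n):.2e}")
# 3 sigma -> alpha = 2.70e-03, matching the 0.27% quoted above
```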

Fixed significance levels such as those mentioned above may be regarded as useful in exploratory data analyses. However, modern statistical advice is that, where the outcome of a test is essentially the final outcome of an experiment or other study, the p-value should be quoted explicitly, and, importantly, it should be quoted whether or not it is judged to be significant. This allows maximum information to be transferred from a summary of the study into meta-analyses.

**Pitfalls**

A common misconception is that a statistically significant result is always of practical significance, or demonstrates a large effect in the population. Unfortunately, this problem is commonly encountered in scientific writing. Given a sufficiently large sample, extremely small and unimportant differences can be found to be statistically significant; statistical significance says nothing about the practical significance of a difference.
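A short simulation illustrates this point. The shift of 0.01 standard deviations, the sample sizes and the random seed below are all arbitrary assumptions chosen for demonstration.

```python
# Illustration (with made-up numbers) that a negligible difference
# becomes "statistically significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_shift = 0.01  # a practically negligible effect

for n in (100, 1_000_000):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(true_shift, 1.0, size=n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9}: p = {p:.4f}")
# Small n: p is typically far above 0.05. Huge n: p is typically tiny,
# even though the effect (0.01 standard deviations) is trivial.
```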

One of the more common problems in significance testing is the tendency for multiple comparisons to yield spurious significant differences even where the null hypothesis is true. For instance, in a study of twenty comparisons, using an α-level of 5%, on average one comparison will yield a significant result despite the null hypothesis being true in every case. In these cases p-values are adjusted in order to control either the false discovery rate or the familywise error rate.
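The sketch below simulates twenty comparisons under a true null hypothesis and applies a Bonferroni adjustment, which is one simple (and conservative) way to control the familywise error rate; the sample sizes and seed are illustrative assumptions.

```python
# Sketch of the multiple-comparisons problem: twenty tests run under
# a true null hypothesis, with a Bonferroni adjustment shown as one
# simple way to control the familywise error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, m = 0.05, 20

# Twenty comparisons where the null hypothesis is true in every case
p_values = np.array([
    stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
    for _ in range(m)
])

print("unadjusted rejections:", np.sum(p_values < alpha))
print("Bonferroni rejections:", np.sum(p_values < alpha / m))
# On average one of the twenty unadjusted tests is "significant" by
# chance alone; the adjusted threshold alpha/m guards against this.
```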

An additional problem is that frequentist analyses of p-values are considered by some to overstate "statistical significance". [Goodman S. "Toward evidence-based medical statistics. 1: The P value fallacy." Ann Intern Med 130(12):995–1004, 1999. PMID 10383371. http://www.annals.org/cgi/pmidlookup?view=long&pmid=10383371] [Goodman S. "Toward evidence-based medical statistics. 2: The Bayes factor." Ann Intern Med 130(12):1005–13, 1999. PMID 10383350. http://www.annals.org/cgi/pmidlookup?view=long&pmid=10383350] See Bayes factor for details.

Yet another common pitfall occurs when a researcher writes the ambiguous statement "we found no statistically significant difference," which is then misquoted by others as "they found that there was no difference." Actually, statistics cannot be used to prove that there is exactly zero difference between two populations. Failing to find evidence that there is a difference does not constitute evidence that there is no difference. This principle is sometimes described by the maxim "absence of evidence is not evidence of absence."

According to J. Scott Armstrong, attempts to educate researchers on how to avoid the pitfalls of statistical significance have had little success. In the papers "Significance Tests Harm Progress in Forecasting" [Armstrong, J. Scott. "Significance tests harm progress in forecasting." International Journal of Forecasting 23:321–327, 2007. doi:10.1016/j.ijforecast.2007.03.004] and "Statistical Significance Tests are Unnecessary Even When Properly Done" [Armstrong, J. Scott. "Statistical Significance Tests are Unnecessary Even When Properly Done." International Journal of Forecasting 23:335–336, 2007. doi:10.1016/j.ijforecast.2007.01.010], Armstrong makes the case that, even when done properly, statistical significance tests are of no value: a number of attempts failed to find empirical evidence supporting their use, and, he argues, they harm the development of scientific knowledge by distracting researchers from the use of proper methods. Armstrong suggests that authors should avoid tests of statistical significance and instead report effect sizes, confidence intervals, replications/extensions, and meta-analyses.

Use of the statistical significance test has been called seriously flawed and unscientific by authors Deirdre McCloskey and Stephen Ziliak. They point out that "insignificance" does not mean unimportant, and propose that the scientific community abandon use of the test altogether, as it can cause false hypotheses to be accepted and true hypotheses to be rejected. [McCloskey, Deirdre N. and Stephen T. Ziliak. The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor: The University of Michigan Press, 2008. ISBN 0472050079.]

**Signal–noise ratio conceptualisation of significance**

Statistical significance can be considered to be the confidence one has in a given result. In a comparison study, it depends on the relative difference between the groups compared, the number of measurements, and the noise associated with the measurements. In other words, the confidence one has that a given result is non-random (i.e. not a consequence of chance) depends on the signal-to-noise ratio (SNR) and the sample size.

Expressed mathematically, the confidence that a result is not due to random chance is given by the following formula by Sackett: [Sackett DL. "Why randomized controlled trials fail but needn't: 2. Failure to employ physiological statistics, or the only formula a clinician-trialist is ever likely to need (or understand!)." CMAJ 165(9):1226–37, October 2001. PMID 11706914. PMC 81587. http://www.cmaj.ca/cgi/pmidlookup?view=long&pmid=11706914]

$\mathrm{confidence} = \frac{\mathrm{signal}}{\mathrm{noise}} \times \sqrt{\mathrm{sample\ size}}$

For clarity, the above formula is presented in tabular form below.

**Dependence of confidence on noise, signal and sample size (tabular form)**

| Quantity | Change | Effect on confidence |
|---|---|---|
| Signal (effect size) | increases | confidence increases |
| Noise | increases | confidence decreases |
| Sample size | increases | confidence increases |

In words, confidence in a result is high if the noise is low and/or the sample size is large and/or the effect size (signal) is large. The confidence of a result (and its associated confidence interval) is *not* dependent on effect size alone: if the sample size is large and the noise is low, even a small effect size can be measured with great confidence. Whether a small effect size is considered important depends on the context of the events compared. In medicine, small effect sizes (reflected by small increases in risk) are often considered clinically relevant and are frequently used to guide treatment decisions (if there is great confidence in them). Whether a given treatment is considered a worthy endeavour depends on the risks, benefits and costs.
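A toy implementation of Sackett's formula makes the trade-off concrete: a small signal measured against the same noise can reach the same confidence as a large one, given a big enough sample. The function name and all numbers below are illustrative assumptions.

```python
# Toy implementation of Sackett's formula:
# confidence = (signal / noise) * sqrt(sample size)
import math

def confidence(signal: float, noise: float, sample_size: int) -> float:
    """Signal-to-noise ratio scaled by the square root of the sample size."""
    return (signal / noise) * math.sqrt(sample_size)

# A large effect with a modest sample...
print(confidence(signal=10.0, noise=5.0, sample_size=25))        # -> 10.0
# ...matches a tiny effect backed by a very large sample.
print(confidence(signal=0.1, noise=5.0, sample_size=250_000))    # -> 10.0
```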

**See also**

* A/B testing
* ABX test
* Fisher's method for combining independent tests of significance
* Reasonable doubt

**References**

**External links**

* Raymond Hubbard, M.J. Bayarri, "P Values are not Error Probabilities" (http://ftp.isds.duke.edu/WorkingPapers/03-26.pdf). A working paper that explains the difference between Fisher's evidential p-value and the Neyman–Pearson Type I error rate α.
* "The Concept of Statistical Significance Testing" (http://www.ericdigests.org/1995-1/testing.htm). Article by Bruce Thompson of the ERIC Clearinghouse on Assessment and Evaluation, Washington, D.C.
