Raven paradox
The Raven paradox, also known as Hempel's paradox or Hempel's ravens, is a paradox proposed by the German logician Carl Gustav Hempel in the 1940s to illustrate a problem where inductive logic violates intuition. It reveals the problem of induction.
The paradox
Hempel describes the paradox in terms of the hypothesis [Hempel, CG (1945) Studies in the Logic of Confirmation I. Mind, Vol 54, No. 213, p. 1 [http://www.jstor.org/sici?sici=0026-4423(194501)2%3A54%3A213%3C1%3ASITLOC%3E2.0.CO%3B2-0 JSTOR] ] [Hempel, CG (1945) Studies in the Logic of Confirmation II. Mind, Vol 54, No. 214, p. 97 [http://www.jstor.org/sici?sici=0026-4423(194504)2%3A54%3A214%3C97%3ASITLOC%3E2.0.CO%3B2-M JSTOR] ] :
: (1) "All ravens are black."
In strict logical terms, via contraposition, this statement is equivalent to:
: (2) "Everything that is not black is not a raven."
It should be clear that in all circumstances where (2) is true, (1) is also true; and likewise, in all circumstances where (2) is false (i.e. if we imagine a world in which something that was not black, yet was a raven, existed), (1) is also false. This establishes logical equivalence.
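This equivalence can also be checked mechanically. The following minimal Python sketch (an illustration added here, not part of Hempel's presentation; the three-object domain is arbitrary) enumerates every way of assigning "raven" and "black" to the objects and confirms that (1) and (2) are true in exactly the same cases.
 from itertools import product
 
 objects = range(3)  # an arbitrary toy domain of three objects
 
 def all_worlds():
     # every assignment of (is_raven, is_black) to each object
     return product(product([False, True], repeat=2), repeat=len(objects))
 
 def all_ravens_are_black(world):              # statement (1)
     return all(black for raven, black in world if raven)
 
 def non_black_things_are_non_ravens(world):   # statement (2)
     return all(not raven for raven, black in world if not black)
 
 assert all(all_ravens_are_black(w) == non_black_things_are_non_ravens(w)
            for w in all_worlds())
 print("(1) and (2) agree in every possible world")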
Given a general statement such as "all ravens are black", we would generally consider a form of the same statement that refers to a specific observable instance of the general class to constitute evidence for that general statement. For example,
: (3) "Nevermore, my pet raven, is black."
is clearly evidence supporting the hypothesis that "all ravens are black".
The paradox arises when this same process is applied to statement (2). On sighting a green apple, we can observe:
: (4) "This green (and thus not black) thing is an apple (and thus not a raven)."
By the same reasoning, this statement is evidence that (2) "everything that is not black is not a raven." But since (as above) this statement is logically equivalent to (1) "all ravens are black", it follows that the sight of a green apple offers evidence that all ravens are black.
Proposed Resolutions
Two apparently reasonable premises:
: The Equivalence Condition (EC): If a proposition, X, provides evidence in favor of another proposition Y, then X also provides evidence in favor of any proposition which is logically equivalent to Y.
and
: Nicod's Criterion (NC): A proposition of the form "All P are Q" is supported by the observation of a particular P which is Q.
can be combined to reach the paradoxical conclusion:
: (PC): The observation of a green apple provides evidence that all ravens are black.
A resolution to the paradox must therefore either accept (PC), or reject (EC), or reject (NC). A satisfactory resolution should also explain "why" there naively appears to be a paradox. Solutions which accept the paradoxical conclusion can do this by presenting a proposition which we intuitively know to be false but which is easily confused with (PC), while solutions which reject (EC) or (NC) should present a proposition which we intuitively know to be true but which is easily confused with (EC) or (NC) [Maher, P (1999) Inductive Logic and the Ravens Paradox, Philosophy of Science, 66, p. 50 [http://www.jstor.org/sici?sici=0031-8248(199903)66%3A1%3C50%3AILATRP%3E2.0.CO%3B2-9 JSTOR] ].
Approaches which Accept the Paradoxical Conclusion
Hempel's Resolution
Hempel himself accepted the paradoxical conclusion, arguing that the reason the result appears paradoxical is because we possess prior information without which the observation of a non-black non-raven would indeed provide evidence that all ravens are black.
He illustrates this with the example of the generalization "All sodium salts burn yellow", and asks us to consider the observation which occurs when somebody holds a piece of pure ice in a colorless flame which does not turn yellow.
:This result would confirm the assertion, "Whatever does not burn yellow is no sodium salt", and consequently, by virtue of the equivalence condition, it would confirm the original formulation. Why does this impress us as paradoxical? The reason becomes clear when we compare the previous situation with the case of an experiment where an object whose chemical constitution is as yet unknown to us is held into a flame and fails to turn it yellow, and where subsequent analysis reveals it to contain no sodium salt. This outcome, we should no doubt agree, is what was to be expected on the basis of the hypothesis ... thus the data here obtained constitute confirming evidence for the hypothesis.
: In the seemingly paradoxical cases of confirmation, we are often not actually judging the relation of the given evidence, E alone to the hypothesis H ... we tacitly introduce a comparison of H with a body of evidence which consists of E in conjunction with an additional amount of information which we happen to have at our disposal; in our illustration, this information includes the knowledge (1) that the substance used in the experiment is ice, and (2) that ice contains no sodium salt. If we assume this additional information as given, then, of course, the outcome of the experiment can add no strength to the hypothesis under consideration. But if we are careful to avoid this tacit reference to additional knowledge ... the paradoxes vanish. [Hempel, CG (1945) Studies in the Logic of Confirmation I. Mind Vol 54, No. 213 p.1 [http://www.jstor.org/sici?sici=0026-4423(194501)2%3A54%3A213%3C1%3ASITLOC%3E2.0.CO%3B2-0 JSTOR] ]
The Standard Bayesian Solution
One of the most popular proposed resolutions is to accept the conclusion that the observation of a green apple provides evidence that all ravens are black, but to argue that the amount of confirmation provided is very small, due to the large discrepancy between the number of ravens and the number of non-black objects. According to this resolution, the conclusion appears paradoxical because we intuitively estimate the amount of evidence provided by the observation of a green apple to be zero, when it is in fact non-zero but very small.
I J Good's presentation of this argument in 1960 [Good, IJ (1960) The Paradox of Confirmation, "The British Journal for the Philosophy of Science", Vol. 11, No. 42, 145-149 [http://links.jstor.org/sici?sici=0007-0882%28196008%2911%3A42%3C145%3ATPOC%3E2.0.CO%3B2-5 JSTOR] ] is perhaps the best known, and variations of the argument have been popular ever since [Fitelson, B and Hawthorne, J (2006) How Bayesian Confirmation Theory Handles the Paradox of the Ravens, in Probability in Science, Chicago: Open Court [http://fitelson.org/ravens.pdf Link] ], although it had been presented in 1958 [Alexander, HG (1958) The Paradoxes of Confirmation, The British Journal for the Philosophy of Science, Vol. 9, No. 35, p. 227 [http://www.jstor.org/stable/685654?origin=JSTOR-pdf JSTOR] ] and early forms of the argument appeared as early as 1940 [Hosiasson-Lindenbaum, J (1940) On Confirmation, The Journal of Symbolic Logic, Vol. 5, No. 4, p. 133 [http://www.jstor.org/action/showArticle?doi=10.2307/2268173 JSTOR] ]. Good's argument involves calculating the
weight of evidence provided by the observation of a black raven or a white shoe in favor of the hypothesis that all the ravens in a collection of objects are black. The weight of evidence is the logarithm of the Bayes factor, which in this case is simply the factor by which the odds of the hypothesis change when the observation is made. The argument goes as follows:
:... suppose that there are $N$ objects that might be seen at any moment, of which $r$ are ravens and $b$ are black, and that the $N$ objects each have probability $1/N$ of being seen. Let $H_i$ be the hypothesis that there are $i$ non-black ravens, and suppose that the hypotheses $H_i$ ($i = 0, 1, \ldots, r$) are initially equiprobable. Then, if we happen to see a black raven, the Bayes factor in favour of $H_0$ is
:$\frac{P(\text{black raven seen} \mid H_0)}{\text{average}_{i \ge 1}\, P(\text{black raven seen} \mid H_i)} = \frac{r/N}{\frac{1}{r}\sum_{i=1}^{r}(r-i)/N} = \frac{2r}{r-1}$
i.e. about 2 if the number of ravens in existence is known to be large. But the factor if we see a white shoe is only
:$\frac{P(\text{white shoe seen} \mid H_0)}{\text{average}_{i \ge 1}\, P(\text{white shoe seen} \mid H_i)} = \frac{(N-b)/N}{\frac{1}{r}\sum_{i=1}^{r}(N-b-i)/N} = \frac{N-b}{N-b-(r+1)/2}$
:and this exceeds unity by only about $r/(2N-2b)$ if $N-b$ is large compared to $r$. Thus the weight of evidence provided by the sight of a white shoe is positive, but is small if the number of ravens is known to be small compared to the number of non-black objects. [Note: Good used "crow" instead of "raven", but "raven" has been used here throughout for consistency.]
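As a rough numerical illustration of Good's calculation (the values of N, r and b below are invented for the example, not taken from Good), the two Bayes factors can be computed directly:
 N, r, b = 1000000, 100, 500000   # hypothetical numbers of objects, ravens, black objects
 
 def bayes_factor(likelihood):
     # P(observation | H_0) divided by the average of P(observation | H_i) for i >= 1
     average_alternative = sum(likelihood(i) for i in range(1, r + 1)) / r
     return likelihood(0) / average_alternative
 
 # Seeing a black raven: under H_i there are r - i black ravens among the N objects.
 print(bayes_factor(lambda i: (r - i) / N))        # about 2 (exactly 2r/(r-1))
 
 # Seeing a white shoe: under H_i there are N - b - i non-black non-ravens.
 print(bayes_factor(lambda i: (N - b - i) / N))    # about 1 + r/(2N - 2b), barely above 1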
Many of the proponents of this resolution and variants of it have been advocates of Bayesian probability, and it is now commonly called the Bayesian Solution, although, as Chihara [Chihara (1987) Some Problems for Bayesian Confirmation Theory, British Journal for the Philosophy of Science, Vol. 38, No. 4 [http://bjps.oxfordjournals.org/cgi/reprint/38/4/551 LINK] ] observes, "there is no such thing as "the" Bayesian solution. There are many different `solutions' that Bayesians have put forward using Bayesian techniques." Noteworthy approaches using Bayesian techniques include Earman [Earman, 1992 Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory, MIT Press, Cambridge, MA.], Eells [Eells, 1982 Rational Decision and Causality. New York: Cambridge University Press], Gibson [Gibson, 1969 On Ravens and Relevance and a Likelihood Solution of the Paradox of Confirmation [http://www.jstor.org/stable/686720 LINK] ], Hosiasson-Lindenbaum [Hosiasson-Lindenbaum 1940], Howson and Urbach [Howson, Urbach, 1993 Scientific Reasoning: The Bayesian Approach, Open Court Publishing Company], Mackie [Mackie, 1963 The Paradox of Confirmation, Brit. J. Phil. Sci. Vol. 13, No. 52, p. 265 [http://bjps.oxfordjournals.org/cgi/content/citation/XIII/52/265 LINK] ] and Hintikka [Hintikka, 1969], who claims that his approach is "more Bayesian than the so-called `Bayesian solution' of the same paradox." Bayesian approaches which make use of Carnap's theory of inductive inference include Humburg [Humburg 1986, The solution of Hempel's raven paradox in Rudolf Carnap's system of inductive logic, Erkenntnis, Vol. 24, No. 1], Maher [Maher 1999] and Fitelson et al. [Fitelson 2006]. Vranas [Vranas (2002) Hempel's Raven Paradox: A Lacuna in the Standard Bayesian Solution [http://philsci-archive.pitt.edu/archive/00000688/00/hempelacuna.doc LINK] ] introduced the term "Standard Bayesian Solution" to avoid confusion.
The Carnapian Approach
Maher [Maher, 1999] accepts the paradoxical conclusion, and refines it:
:A non-raven (of whatever color) confirms that all ravens are black because
:(i) the information that this object is not a raven removes the possibility that this object is a counterexample to the generalization, and
:(ii) it reduces the probability that unobserved objects are ravens, thereby reducing the probability that they are counterexamples to the generalization.
In order to reach (ii), he appeals to Carnap's theory of inductive probability, which is (from the Bayesian point of view) a way of assigning prior probabilities which naturally implements induction. According to Carnap's theory, the posterior probability, $P(Fa \mid E)$, that an object, $a$, will have a predicate, $F$, after the evidence $E$ has been observed, is:
:$P(Fa \mid E) = \frac{n_F + \lambda\, q(F)}{n + \lambda}$
where $q(F)$ is the initial probability that $a$ has the predicate $F$; $n$ is the number of objects which have been examined (according to the available evidence $E$); $n_F$ is the number of examined objects which turned out to have the predicate $F$, and $\lambda$ is a constant which measures resistance to generalization.
If $\lambda$ is close to zero, $P(Fa \mid E)$ will be very close to one after a single observation of an object which turned out to have the predicate $F$, while if $\lambda$ is much larger than $n$, $P(Fa \mid E)$ will be very close to $q(F)$ regardless of the fraction of observed objects which had the predicate $F$.
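A minimal sketch of how a formula of this form behaves (the function and parameter names below are illustrative, not Carnap's or Maher's notation):
 def carnap_posterior(n_F, n, q, lam):
     # probability that the next object has predicate F, after examining n objects
     # of which n_F had F, with initial probability q and resistance-to-generalization lam
     return (n_F + lam * q) / (n + lam)
 
 q = 0.5
 print(carnap_posterior(1, 1, q, lam=0.01))   # lam near zero: close to 1 after one positive case
 print(carnap_posterior(1, 1, q, lam=1000))   # lam much larger than n: stays close to q
 print(carnap_posterior(7, 10, q, lam=2))     # intermediate lam: tracks the observed frequency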
Using this Carnapian approach, Maher identifies a proposition which we intuitively (and correctly) know to be false, but which we easily confuse with the paradoxical conclusion. The proposition in question is the proposition that observing non-ravens tells us about the color of ravens. While this is intuitively false and is also false according to Carnap's theory of induction, observing non-ravens (according to that same theory) causes us to reduce our estimate of the total number of ravens, and thereby reduces the estimated number of possible counterexamples to the rule that all ravens are black.
Hence, from the Bayesian-Carnapian point of view, the observation of a non-raven does not tell us anything about the color of ravens, but it tells us about the prevalence of ravens, and supports "All ravens are black" by reducing our estimate of the number of ravens which might not be black.
The Role of Background Knowledge
Much of the discussion of the paradox in general and the Bayesian approach in particular has centred on the relevance of background knowledge. Surprisingly, Maher [Maher, 1999] shows that, for a large class of possible configurations of background knowledge, the observation of a non-black non-raven provides "exactly the same" amount of confirmation as the observation of a black raven. The configurations of background knowledge which he considers are those which are provided by a "sample proposition", namely a proposition which is a
conjunction of atomic propositions, each of which ascribes a single predicate to a single individual, with no two atomic propositions involving the same individual. Thus, a proposition of the form "A is a black raven and B is a white shoe" can be considered a sample proposition by taking "black raven" and "white shoe" to be predicates.
Maher's proof appears to contradict the result of the Bayesian argument, which was that the observation of a non-black non-raven provides much less evidence than the observation of a black raven. The reason is that the background knowledge which Good and others use cannot be expressed in the form of a sample proposition - in particular, variants of the standard Bayesian approach often suppose (as Good did in the argument quoted above) that the total numbers of ravens, non-black objects and/or the total number of objects, are known quantities. Maher comments that, "The reason we think there are more non-black things than ravens is because that has been true of the things we have observed to date. Evidence of this kind can be represented by a sample proposition. But ... given any sample proposition as background evidence, a non-black non-raven confirms A just as strongly as a black raven does ... Thus my analysis suggests that this response to the paradox [i.e. the Standard Bayesian one] cannot be correct."
Fitelson et al. [Fitelson, 2006] examined the conditions under which the observation of a non-black non-raven provides less evidence than the observation of a black raven. They show that, if $a$ is an object selected at random, $Ba$ is the proposition that the object is black, and $Ra$ is the proposition that the object is a raven, then the condition:
:
is sufficient for the observation of a non-black non-raven to provide less evidence than the observation of a black raven. Here, a line over a proposition indicates the logical negation of that proposition.
This condition does not tell us "how large" the difference in the evidence provided is, but a later calculation in the same paper shows that the weight of evidence provided by a black raven exceeds that provided by a non-black non-raven by about $-\log P(Ba \mid Ra \wedge \overline{H})$, where $H$ is the hypothesis that all ravens are black. This is equal to the amount of additional information (in bits, if the base of the logarithm is 2) which is provided when a raven of unknown color is discovered to be black, given the hypothesis that not all ravens are black.
Fitelson et al. [ Fitelson, 2006] explain that:
:Under normal circumstances, $P(Ba \mid Ra \wedge \overline{H})$ may be somewhere around 0.9 or 0.95; so $1/P(Ba \mid Ra \wedge \overline{H})$ is somewhere around 1.11 or 1.05. Thus, it may appear that a single instance of a black raven does not yield much more support than would a non-black non-raven. However, under plausible conditions it can be shown that a sequence of $n$ instances (i.e. of $n$ black ravens, as compared to $n$ non-black non-ravens) yields a ratio of likelihood ratios on the order of $\left[1/P(Ba \mid Ra \wedge \overline{H})\right]^n$, which blows up significantly for large $n$.
The authors point out that their analysis is completely consistent with the supposition that a non-black non-raven provides an extremely small amount of evidence, although they do not attempt to prove it; they merely calculate the difference between the amount of evidence that a black raven provides and the amount of evidence that a non-black non-raven provides.
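The compounding effect described in the quotation can be seen with a short sketch (the value used for P(Ba | Ra and not-H) is purely illustrative):
 p = 0.95          # an illustrative value of P(Ba | Ra and not-H)
 for n in (1, 10, 100):
     # ratio of likelihood ratios for n black ravens versus n non-black non-ravens
     print(n, (1 / p) ** n)
 # one observation gives only about 1.05, but 100 observations give a factor of roughly 170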
Rejecting Nicod's Criterion
The Red Herring
Good [Good 1967, The White Shoe is a Red Herring, British Journal for the Philosophy of Science, Vol. 17, No. 4, p. 322 [http://www.jstor.org/stable/686774 JSTOR] ] gives an example of background knowledge with respect to which the observation of a black raven "decreases" the probability that all ravens are black:
:Suppose that we know we are in one or other of two worlds, and the hypothesis, H, under consideration is that all the ravens in our world are black. We know in advance that in one world there are a hundred black ravens, no non-black ravens, and a million other birds; and that in the other world there are a thousand black ravens, one white raven, and a million other birds. A bird is selected equiprobably at random from all the birds in our world. It turns out to be a black raven. This is strong evidence ... that we are in the second world, wherein not all ravens are black.
Good concludes that the white shoe is a "red herring": Sometimes even a black raven can constitute evidence "against" the hypothesis that all ravens are black, so the fact that the observation of a white shoe can support it is not surprising and not worth attention. Nicod's criterion is false, according to Good, and so the paradoxical conclusion does not follow.
Hempel rejected this as a solution to the paradox, insisting that the proposition 'c is a raven and is black' must be considered "by itself and without reference to any other information", and pointing out that it "... was emphasized in section 5.2(b) of my article in Mind ... that the very appearance of paradoxicality in cases like that of the white shoe results in part from a failure to observe this maxim." [Hempel 1967, The White Shoe - No Red Herring, The British Journal for the Philosophy of Science, Vol. 18, No. 3, p. 239 [http://www.jstor.org/stable/686596 JSTOR] ]
The question which then arises is whether the paradox is to be understood in the context of absolutely no background information (as Hempel suggests), or in the context of the background information which we actually possess regarding ravens and black objects, or with regard to all possible configurations of background information. Good had shown that, for some configurations of background knowledge, Nicod's criterion is false (provided that we are willing to equate "inductively support" with "increase the probability of" - see below). The possibility remained that, with respect to our actual configuration of knowledge, which is very different from Good's example, Nicod's criterion might still be true and so we could still reach the paradoxical conclusion. Hempel, on the other hand, insists that it is our background knowledge itself which is the red herring, and that we should consider induction with respect to a condition of perfect ignorance.
Good's Baby
In his proposed resolution, Maher implicitly made use of the fact that the proposition "All ravens are black" is highly probable when it is highly probable that there are no ravens. Good had used this fact before to respond to Hempel's insistence that Nicod's criterion was to be understood to hold in the absence of background information [Good 1968, The White Shoe qua Red Herring is Pink, The British Journal for the Philosophy of Science, Vol. 19, No. 2, p. 156 [http://www.jstor.org/stable/686795 JSTOR] ]:
:...imagine an infinitely intelligent newborn baby having built-in neural circuits enabling him to deal with formal logic, English syntax, and subjective probability. He might now argue, after defining a raven in detail, that it is extremely unlikely that there are any ravens, and therefore it is extremely likely that all ravens are black, that is, that $H$ is true. 'On the other hand', he goes on to argue, 'if there are ravens, then there is a reasonable chance that they are of a variety of colours. Therefore, if I were to discover that even a black raven exists I would consider $H$ to be less probable than it was initially.'
This, according to Good, is as close as one can reasonably expect to get to a condition of perfect ignorance, and it appears that Nicod's condition is still false. Maher made Good's argument more precise by using Carnap's theory of induction to formalize the notion that if there is one raven, then it is likely that there are many [Maher 2004, Probability Captures the Logic of Scientific Confirmation [http://patrick.maher1.net/pctl.pdf LINK] ].
Maher's argument considers a universe of exactly two objects, each of which is very unlikely to be a raven (a one in a thousand chance) and reasonably unlikely to be black (a one in ten chance). Using Carnap's formula for induction, he finds that the probability that all ravens are black decreases from 0.9985 to 0.8995 when it is discovered that one of the two objects is a black raven.
Maher concludes that not only is the paradoxical conclusion true, but that Nicod's criterion is false in the absence of background knowledge (except for the knowledge that the number of objects in the universe is two and that ravens are less likely than black things).
Distinguished Predicates
Quine [Quine, WV (1969) Natural Kinds, in Ontological Relativity and other Essays. New York: Columbia University Press, p. 114] argued that the solution to the paradox lies in the recognition that certain predicates, which he called natural kinds, have a distinguished status with respect to induction. This can be illustrated with Nelson Goodman's example of the predicate grue. An object is grue if it is green before (say) 2010 and blue afterwards. Clearly, we expect objects which were green before 2010 to remain green afterwards, but we do not expect objects which were found to be grue before 2010 to be grue afterwards. Quine's explanation is that "green" is a natural kind; a privileged predicate which can be used for induction, while "grue" is not a natural kind and using induction with it leads to error.
This suggests a resolution to the paradox: Nicod's criterion is true for natural kinds, such as "green" and "black", but is false for artificially contrived predicates, such as "grue" or "non-raven". The paradox arises, according to this resolution, because we implicitly interpret Nicod's criterion as applying to all predicates when in fact it only applies to natural kinds.
Another approach which favours specific predicates over others was taken by Hintikka [Hintikka, 1969]. Hintikka was motivated to find a Bayesian approach to the paradox which did not make use of knowledge about the relative frequencies of ravens and black things. Arguments concerning relative frequencies, he contends, cannot always account for the perceived irrelevance of evidence consisting of observations of objects of type A for the purposes of learning about objects of type not-A.
His argument can be illustrated by rephrasing the paradox using predicates other than "raven" and "black". For example, "All men are tall" is equivalent to "All short people are women", and so observing that a randomly selected person is a short woman should provide evidence that all men are tall. Despite the fact that we lack background knowledge to indicate that there are dramatically fewer men than short people, we still find ourselves inclined to reject the conclusion. Hintikka's example is: "... a generalization like 'no material bodies are infinitely divisible' seems to be completely unaffected by questions concerning immaterial entities, independently of what one thinks of the relative frequencies of material and immaterial entities in one's universe of discourse."
His solution is to introduce an "order" into the set of predicates. When the logical system is equipped with this order, it is possible to restrict the "scope" of a generalization such as "All ravens are black" so that it applies to ravens only and not to non-black things, since the order privileges ravens over non-black things. As he puts it:
:If we are justified in assuming that the scope of the generalization 'All ravens are black' can be restricted to ravens, then this means that we have some outside information which we can rely on concerning the factual situation. The paradox arises from the fact that this information, which colors our spontaneous view of the situation, is not incorporated in the usual treatments of the inductive situation. [Hintikka J. 1969, Inductive Independence and the Paradoxes of Confirmation [http://books.google.com/books?id=pWtPcRwuacAC&pg=PA24&lpg=PA24&ots=-1PKZt0Jbz&lr=&sig=EK2qqOZ6-cZR1P1ZKIsndgxttMs LINK] ]
Proposed Resolutions which Reject the Equivalence Condition
Selective Confirmation
Scheffler and Goodman [Scheffler I, Goodman NJ, Selective Confirmation and the Ravens, Journal of Philosophy, Vol. 69, No. 3, 1972 [http://www.jstor.org/stable/2024647 JSTOR] ] took an approach to the paradox which incorporates Karl Popper's view that scientific hypotheses are never really confirmed, only falsified.
The approach begins by noting that the observation of a black raven does not prove that "All ravens are black" but it falsifies the contrary hypothesis, "No ravens are black". A non-black non-raven, on the other hand, is consistent with both "All ravens are black" and with "No ravens are black". As the authors put it:
:... the statement that all ravens are black is not merely "satisfied" by evidence of a black raven but is "favored" by such evidence, since a black raven disconfirms the contrary statement that all ravens are not black, i.e. satisfies its denial. A black raven, in other words, satisfies the hypothesis "that all ravens are black rather than not:" it thus selectively confirms "that all ravens are black".
Selective confirmation violates the equivalence condition since a black raven selectively confirms "All ravens are black" but not "All non-black things are non-ravens".
Probabilistic or Non-Probabilistic Induction
Scheffler and Goodman's concept of selective confirmation is an example of an interpretation of "provides evidence in favor of" which does not coincide with "increase the probability of". This must be a general feature of all resolutions which reject the equivalence condition, since logically equivalent propositions must always have the same probability.
It is impossible for the observation of a black raven to increase the probability of the proposition "All ravens are black" without causing exactly the same change to the probability that "All non-black things are non-ravens". If an observation inductively supports the former but not the latter, then "inductively support" must refer to something other than changes in the probabilities of propositions. A possible loophole is to interpret "All" as "Nearly all" - "Nearly all ravens are black" is not equivalent to "Nearly all non-black things are non-ravens", and these propositions can have very different probabilities [Gaifman, H (1979) Subjective Probability, Natural Predicates and Hempel's Ravens, Erkenntnis, Vol. 14, p. 105 [http://www.springerlink.com/content/vw370x1531q54422/ Springer] ].
This raises the broader question of the relation of probability theory to inductive reasoning.
Karl Popper argued that probability theory alone cannot account for induction. His argument involves splitting a hypothesis, $H$, into a part which is deductively entailed by the evidence, $E$, and another part. This can be done in two ways.
First, consider the splitting [Popper, K. Realism and the Aim of Science, Routledge, 1992, p. 325]:
:$H = A \wedge B$
where $A$, $B$ and $E$ satisfy appropriate probabilistic independence conditions: $P(A \wedge B) = P(A)\,P(B)$, $P(B \wedge E) = P(B)\,P(E)$, and so on. The condition which is necessary for such a splitting of $H$ and $E$ to be possible is $P(H \mid E) > P(H)$, that is, that $H$ is probabilistically supported by $E$.
Popper's observation is that the part, $A$, of $H$ which receives support from $E$ actually follows deductively from $E$, while the part of $H$ which does not follow deductively from $E$ receives no support at all from $E$ - that is, $P(B \mid E) = P(B)$.
Second, the splitting [Popper K, Miller D, (1983) A Proof of the Impossibility of Inductive Probability, Nature, Vol. 302, p. 687 [http://www.nature.com/nature/journal/v302/n5910/abs/302687a0.html Link] ]:
:$H = (H \vee E) \wedge (H \vee \overline{E})$
separates $H$ into $H \vee E$, which as Popper says, "is the logically strongest part of $H$ (or of the content of $H$) that follows [deductively] from $E$," and $H \vee \overline{E}$, which, he says, "contains all of $H$ that goes beyond $E$." He continues:
:Does $E$, in this case, provide any support for the factor $H \vee \overline{E}$, which in the presence of $E$ is alone needed to obtain $H$? The answer is: No. It never does. Indeed, $E$ countersupports $H \vee \overline{E}$ unless either $P(H \mid E) = 1$ or $P(E) = 1$ (which are possibilities of no interest). ...
:This result is completely devastating to the inductive interpretation of the calculus of probability. All probabilistic support is purely deductive: that part of a hypothesis that is not deductively entailed by the evidence is always strongly countersupported by the evidence ... There is such a thing as probabilistic support; there might even be such a thing as inductive support (though we hardly think so). But the calculus of probability reveals that probabilistic support cannot be inductive support.
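The countersupport claim in the quoted passage can be spot-checked numerically; the sketch below (an added illustration, not Popper and Miller's own proof) draws random probability assignments over the four joint truth-value combinations of H and E and confirms that P(H or not-E | E) never exceeds P(H or not-E).
 import random
 
 for _ in range(10000):
     weights = [random.random() for _ in range(4)]
     total = sum(weights)
     # joint probabilities of (H and E), (H and not-E), (not-H and E), (not-H and not-E)
     p_he, p_hne, p_nhe, p_nhne = (w / total for w in weights)
 
     p_e = p_he + p_nhe
     p_h_or_note = p_he + p_hne + p_nhne      # everything except (not-H and E)
     p_h_or_note_given_e = p_he / p_e         # given E, "H or not-E" reduces to H
 
     assert p_h_or_note_given_e <= p_h_or_note + 1e-12
 print("no counterexample found: E never supports (H or not-E)")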
The Orthodox Approach
The orthodox Neyman-Pearson theory of hypothesis testing considers how to decide whether to "accept" or "reject" a hypothesis, rather than what probability to assign to the hypothesis. From this point of view, the hypothesis that "All ravens are black" is not accepted "gradually", as its probability increases towards one when more and more observations are made, but is accepted in a single action as the result of evaluating the data which has already been collected. As Neyman and Pearson put it:
:Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong. [Neyman J, Pearson ES (1933) On the Problem of the Most Efficient Tests of Statistical Hypotheses, Phil. Transactions of the Royal Society of London. Series A, Vol. 231, p289 [http://www.jstor.org/stable/91247 JSTOR] ]
According to this approach, it is not necessary to assign any value to the probability of a "hypothesis", although one must certainly take into account the probability of the "data" given the hypothesis, or given a competing hypothesis, when deciding whether to accept or to reject. The acceptance or rejection of a hypothesis carries with it the risk of error.
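As a rough illustration of this accept-or-reject viewpoint (the alternative hypothesis and the numbers below are invented for the example, not drawn from the literature), one can test "all ravens are black" against the alternative that only a fraction of ravens are black by deciding in advance how many ravens to inspect and rejecting the hypothesis if any non-black raven turns up:
 import math
 
 theta = 0.95          # hypothetical alternative: only 95% of ravens are black
 beta_target = 0.05    # desired bound on the risk of wrongly accepting "all ravens are black"
 
 # Rule: inspect n ravens, reject "all ravens are black" if any non-black raven is seen.
 # Under that hypothesis itself we never reject; under the alternative, the chance of
 # wrongly accepting is theta ** n, so choose n to keep that risk below beta_target.
 n = math.ceil(math.log(beta_target) / math.log(theta))
 print(n, theta ** n)  # 59 ravens suffice; the risk of error is then about 0.048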
This contrasts with the Bayesian approach, which requires that the hypothesis be assigned a prior probability, which is revised in the light of the observed data to obtain the final probability of the hypothesis. Within the Bayesian framework there is no risk of error since hypotheses are not accepted or rejected; instead they are assigned probabilities.
An analysis of the paradox from the orthodox point of view has been performed, and leads to, among other insights, a rejection of the equivalence condition:
:It seems obvious that one cannot both "accept" the hypothesis that all P's are Q and also reject the contrapositive, i.e. that all non-Q's are non-P. Yet it is easy to see that on the Neyman-Pearson theory of testing, a test of "All P's are Q" is "not" necessarily a test of "All non-Q's are non-P" or vice versa. A test of "All P's are Q" requires reference to some alternative statistical hypothesis of the form of all P's are Q,