Newcomb's paradox

Newcomb's paradox, also referred to as Newcomb's problem, is a thought experiment involving a game between two players, one of whom purports to be able to predict the future. Whether the problem is actually a paradox is disputed.

Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. It was first analyzed in a philosophy paper by Robert Nozick in 1969, which spread it to the philosophical community, and it appeared in Martin Gardner's Scientific American column in 1974. Today it is a much-debated problem in the philosophical branch of decision theory, but it has received comparatively little attention from mathematicians.

The problem

A person is playing a game operated by the Predictor, an entity somehow presented as being exceptionally skilled at predicting people's actions. The exact nature of the Predictor varies between retellings of the paradox. Some assume that the character is completely infallible and incapable of error; others assume that the predictor has a very low error rate. The Predictor can be presented as a psychic, as a superintelligent alien, as a deity, as a brain-scanning computer, etc. However, the original discussion by Nozick says only that the Predictor's predictions are "almost certainly" correct, and also specifies that "what you actually decide to do is not part of the explanation of why he made the prediction he made". With this original version of the problem, some of the discussion below is inapplicable.

The player of the game is presented with two boxes, one transparent (labeled A) and the other opaque (labeled B). The player is permitted to take the contents of both boxes, or just the opaque box B. Box A contains a visible $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether the player of the game will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.

By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined. That is, box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes. Before the game begins, the player is aware of all the rules of the game, including the two possible contents of box B, the fact that its contents are based on the Predictor's prediction, and knowledge of the Predictor's infallibility. The only information withheld from the player is what prediction the Predictor made, and thus what the contents of box B are.

Predicted choice    Actual choice    Payout
A and B             A and B          $1,000
A and B             B only           $0
B only              A and B          $1,001,000
B only              B only           $1,000,000
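
The payoff structure can also be written down directly as a function. The following is a minimal sketch in Python (the function payoff is an illustrative name, not part of the problem statement):

    def payoff(predicted_both: bool, took_both: bool) -> int:
        """Player's payout in dollars for one round of the game.

        Box A always holds $1,000; box B holds $1,000,000 exactly when
        the Predictor predicted that only box B would be taken."""
        box_b = 0 if predicted_both else 1_000_000
        return (1_000 + box_b) if took_both else box_b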

The problem is called a paradox because two strategies that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout. The first strategy argues that, regardless of what prediction the Predictor has made, taking both boxes yields more money. That is, if the prediction is for both A and B to be taken, then the player's decision becomes a matter of choosing between $1,000 (by taking A and B) and $0 (by taking just B), in which case taking both boxes is obviously preferable. But, even if the prediction is for the player to take only B, then taking both boxes yields $1,001,000, and taking only B yields only $1,000,000—taking both boxes is still better, regardless of which prediction has been made.

The second strategy suggests taking only B. By this strategy, we can ignore the possibilities that return $0 and $1,001,000, as they both require that the Predictor has made an incorrect prediction, and the problem states that the Predictor is almost never wrong. Thus, the choice becomes whether to receive $1,000 (both boxes) or to receive $1,000,000 (only box B)—so taking only box B is better.

In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."

If the player believes that the predictor can correctly predict any thoughts he or she will have, but has access to some source of random numbers that the predictor cannot predict (say, a coin to flip, or a quantum process), then the game depends on how the predictor reacts to (correctly) knowing that the player will use such a process. If the predictor predicts by reproducing the player's process, then the player should open both boxes with probability 1/2 and will receive an average of $251,000; if the predictor predicts the most probable player action, then the player should open both boxes with probability 1/2 − ε and will receive an average of about $500,999.99; and if the predictor places $0 in box B whenever it believes that the player will use a random process, then the traditional "paradox" holds unchanged.
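
These averages depend entirely on how the predictor's reaction is modeled. As an illustration only, the following sketch computes the expected payout when the player takes both boxes with probability p and the prediction "both boxes" is made independently with probability q; this independence assumption is a simplification introduced here, not part of the original problem, and other predictor models change q as a function of p and therefore change both the optimal strategy and the resulting average.

    def expected_payout(p: float, q: float) -> float:
        """Expected payout, assuming the player takes both boxes with
        probability p and the Predictor independently predicts 'both
        boxes' with probability q (a simplifying assumption)."""
        return (p * q * 1_000                        # predicted both, took both
                + (1 - p) * q * 0                    # predicted both, took only B
                + p * (1 - q) * 1_001_000            # predicted only B, took both
                + (1 - p) * (1 - q) * 1_000_000)     # predicted only B, took only B

    print(expected_payout(0.5, 0.5))  # 500500.0 under this simple independent model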

The crux of the problem

The crux of the paradox is the existence of two contradictory arguments, both of which seem correct.

  1. A powerful intuitive belief that past events cannot be affected: my future action cannot determine the fate of an event that happened before the action.
  2. Newcomb's problem proposes a way of doing precisely this, namely affecting a past event. The Predictor's prediction establishes an equivalence between my choice (renouncing the open box) and the content of the closed box, which was determined in the past. Since I can affect the future event, I can also affect the past event that is equivalent to it.

The use of the first person in the formulation of the second argument is essential: only when playing the role of the chooser do I feel that I determine the fate of the past event. Watching another person participate in the experiment from the outside arouses no feeling of contradiction; his choice and its prediction are parts of a causal chain that is, in principle, not problematic.

A resolution of the paradox must point out an error in one of the two arguments: either the intuition is wrong, or there is something wrong with the proposed way of affecting the past.

The relationship to the idle argument

There is a version of the famous idle argument (see fatalism) that is equivalent to the paradox. It runs as follows:

Suppose that the infallible predictor predicted the grade I will get on tomorrow's exam, and wrote his prediction in a note. Since the content of the note was determined a while ago, I cannot change it. Since I believe that it precisely reflects the grade I will get, I cannot change my grade either. So I may just as well rest, rather than prepare for the exam (hence the name "the idle argument").

In both situations an equivalence between a past event P and a future event F is used to draw a paradoxical conclusion, and both use the same argumentation. In Newcomb's paradox the claim is "I can determine F, hence I can change P", while in the idle argument the claim is "I cannot change P, hence I cannot determine F"; this is the same argument, formulated in the reverse direction.

Attempted resolutions

Many argue that the paradox is primarily a matter of conflicting decision-making models. Using the expected utility hypothesis leads one to believe that the most utility (or money) is to be expected from taking only box B. However, if one uses the dominance principle, one would expect to benefit most from taking both boxes.
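
To make the conflict concrete, suppose, purely as an illustrative assumption, that the Predictor is correct 99% of the time (the problem itself says only "almost certainly"). The expected-utility calculation then looks as follows:

    accuracy = 0.99  # assumed accuracy; not specified by the problem

    # Taking only box B: the Predictor almost certainly foresaw this
    # and filled the box with $1,000,000.
    eu_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0      # 990000.0

    # Taking both boxes: the Predictor almost certainly foresaw this
    # and left box B empty.
    eu_two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000  # 11000.0

Expected utility thus favors taking only box B by a wide margin, while the dominance principle, which holds the contents of box B fixed, favors taking both boxes in every case.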

More recent work has reformulated the problem as a noncooperative game in which the players set the conditional distributions in a Bayes net. It is straightforward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions about the underlying Bayes net. Depending on which Bayes net one assumes, one can derive either strategy as optimal. In this view there is no paradox, only unclear language that hides the fact that one is making two inconsistent assumptions.[1]
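
The following is a minimal sketch of the two inconsistent assumptions (a simplification for illustration, not the cited paper's formalism): in one net the prediction is statistically dependent on the player's choice, while in the other it is fixed independently of it.

    accuracy = 0.99  # assumed predictor accuracy

    def ev_dependent(take_both: bool) -> float:
        """Net 1: P(prediction | choice) = accuracy, so conditioning on
        one's own choice shifts the predicted contents of box B."""
        p_predicted_both = accuracy if take_both else 1 - accuracy
        box_b = (1 - p_predicted_both) * 1_000_000
        return box_b + (1_000 if take_both else 0)

    def ev_independent(take_both: bool, p_predicted_both: float) -> float:
        """Net 2: the prediction is a fixed prior, unaffected by the choice."""
        box_b = (1 - p_predicted_both) * 1_000_000
        return box_b + (1_000 if take_both else 0)

    print(ev_dependent(False), ev_dependent(True))  # 990000.0 11000.0 -> one-box wins
    for q in (0.0, 0.5, 1.0):
        # Two-boxing gains exactly $1,000 for any fixed prior q -> two-box wins
        print(ev_independent(True, q) - ev_independent(False, q))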

Some argue that Newcomb's Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.

Other philosophers have proposed many solutions to the problem, several of which eliminate its seemingly paradoxical nature:

Some suggest that a rational person will choose both boxes and an irrational person will choose just the one; since the Predictor cannot actually exist, rational people therefore fare better. Others have suggested that an irrational person will do better than a rational person, and interpret this paradox as showing how people can be punished for making rational decisions.

Others have suggested that in a world with perfect predictors (or time machines, because a time machine could be the mechanism for making the prediction) causation can go backwards.[2] If a person truly knows the future, and that knowledge affects his actions, then events in the future will be causing effects in the past; the Chooser's choice will have already caused the Predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and the Chooser will do whatever he is fated to do. Others conclude that the paradox shows it is impossible to ever know the future. Taken together, the paradox is a restatement of the old contention that free will and determinism are incompatible, since determinism enables the existence of perfect predictors. Put another way, the paradox presupposes a perfect predictor, implying the Chooser is not free to choose, yet simultaneously presumes a choice can be debated and decided; this suggests to some that the paradox is an artifact of these contradictory assumptions, and some philosophers argue that it is equivalent to the grandfather paradox.

Nozick's exposition, however, specifically excludes backward causation (such as time travel) and requires only that the predictions be of high accuracy, not that they are absolutely certain to be correct. The considerations just discussed are therefore irrelevant to the paradox as seen by Nozick, which focuses on two principles of choice, one probabilistic and the other causal; assuming backward causation removes any conflict between these two principles.

Newcomb's paradox can also be related to the question of machine consciousness, specifically whether a perfect simulation of a person's brain will generate the consciousness of that person.[3] Suppose the Predictor is a machine that arrives at its prediction by simulating the brain of the Chooser as he confronts the problem of which box to choose. If that simulation generates the consciousness of the Chooser, then the Chooser cannot tell whether he is standing in front of the boxes in the real world or in the virtual world generated by the simulation. The "virtual" Chooser would thus tell the Predictor which choice the "real" Chooser is going to make.

Notes

  1. Wolpert, D. H.; Benford, G. (2010). "What does Newcomb's paradox teach us?". arXiv:1003.1343.
  2. Craig, William Lane (1988). "Tachyons, Time Travel, and Divine Omniscience". Journal of Philosophy 85 (3): 135–150. JSTOR 2027068.
  3. Neal, R. M. (2006). "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning". arXiv:math.ST/0608592.

References

  • Nozick, Robert (1969), "Newcomb's Problem and Two Principles of Choice", in Essays in Honor of Carl G. Hempel, ed. Nicholas Rescher, Synthese Library (Dordrecht, the Netherlands: D. Reidel), pp. 114–115.
  • Bar-Hillel, Maya & Margalit, Avishai (1972), "Newcomb's Paradox Revisited", British Journal for the Philosophy of Science, 23, 295–304.
  • Gardner, Martin (1974), "Mathematical Games", Scientific American, March 1974, p. 102; reprinted with an addendum and annotated bibliography in his book The Colossal Book of Mathematics (ISBN 0-393-02023-1).
  • Campbell, Richmond & Sowden, Lanning, eds. (1985), Paradoxes of Rationality and Cooperation: Prisoners' Dilemma and Newcomb's Problem, Vancouver: University of British Columbia Press. (An anthology discussing Newcomb's Problem, with an extensive bibliography.)
  • Levi, Isaac (1982), "A Note on Newcombmania", Journal of Philosophy 79: 337–342. (A paper discussing the popularity of Newcomb's Problem.)
  • Collins, John (2001), "Newcomb's Problem", in International Encyclopedia of the Social and Behavioral Sciences, eds. Neil Smelser & Paul Baltes, Elsevier Science.
