Alpha-beta pruning

Alpha-beta pruning is a search algorithm that seeks to reduce the number of nodes evaluated in the search tree by the minimax algorithm. It is an adversarial search algorithm commonly used for machine playing of two-player games (Tic-tac-toe, Chess, Go, etc.). It stops completely evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. Alpha-beta pruning is a sound optimization in that it does not change the result of the algorithm it optimizes.

History
Allen Newell and Herbert Simon, who used what John McCarthy calls an "approximation" [McCarthy, John. "Human Level AI Is Harder Than It Seemed in 1955". 27 November 2006. http://www-formal.stanford.edu/jmc/slides/wrong/wrong-sli/wrong-sli.html. Retrieved 2006-12-20.], wrote in 1958 that alpha-beta "appears to have been reinvented a number of times". [Newell, Allen and Herbert A. Simon. "Computer Science as Empirical Inquiry: Symbols and Search". Communications of the ACM, Vol. 19, No. 3, March 1976. http://archive.computerhistory.org/projects/chess/related_materials/text/2-3.Computer_science_as_empirical_inquiry/2-3.Computer_science_as_empirical_inquiry.newell_simon.1975.ACM.062303007.pdf. Retrieved 2006-12-21.] Arthur Samuel had an early version, and Richards, Hart, Levine and/or Edwards found alpha-beta independently in the United States. [Richards, D.J. and Hart, T.P. "The Alpha-Beta Heuristic (AIM-030)". Massachusetts Institute of Technology, 4 December 1961 to 28 October 1963. http://hdl.handle.net/1721.1/6098. Retrieved 2006-12-21.] McCarthy proposed similar ideas during the Dartmouth Conference in 1956 and suggested them to a group of his students, including Alan Kotok, at MIT in 1961. [Kotok, Alan. "MIT Artificial Intelligence Memo 41". 3 December 2004. http://www.kotok.org/AI_Memo_41.html. Retrieved 2006-07-01.] Alexander Brudno independently discovered the alpha-beta algorithm, publishing his results in 1963. [Marsland, T.A. "Computer Chess Methods" (PDF), in Encyclopedia of Artificial Intelligence, S. Shapiro (editor). J. Wiley & Sons, May 1987, pp. 159-171. http://www.cs.ualberta.ca/~tony/OldPapers/encyc.mac.pdf. Retrieved 2006-12-21.] Donald Knuth and Ronald W. Moore refined the algorithm in 1975 [Knuth, D. E., and Moore, R. W. "An Analysis of Alpha-Beta Pruning". Artificial Intelligence, Vol. 6, No. 4 (1975), pp. 293-326. Reprinted as Chapter 9 in Knuth, Donald E. Selected Papers on Analysis of Algorithms. Stanford, California: Center for the Study of Language and Information, CSLI Lecture Notes no. 102, 2000. ISBN 1-57586-212-3. http://www-cs-faculty.stanford.edu/~knuth/aa.html] [Abramson, Bruce. "Control Strategies for Two-Player Games". ACM Computing Surveys, Vol. 21, No. 2, June 1989. http://www.theinformationist.com/pdf/constrat.pdf/. Retrieved 2008-08-20.], and it continued to be advanced.

Improvements over naive minimax
The benefit of alpha-beta pruning lies in the fact that branches of the search tree can be eliminated. The search time can in this way be limited to the 'more promising' subtree, and a deeper search can be performed in the same time. Like its predecessor, it belongs to the branch and bound class of algorithms. The optimization reduces the effective depth to slightly more than half that of simple minimax if the nodes are evaluated in an optimal or near-optimal order (best choice for the side on move ordered first at each node).

With an (average or constant) branching factor of b, and a search depth of d ply, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b*b*...*b) = O(b^d) – the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of positions searched is about O(b*1*b*1*...*b) for odd depth and O(b*1*b*1*...*1) for even depth, or about O(b^(d/2)). In the latter case, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation (Russell & Norvig 2003). The explanation of b*1*b*1*... is that all the first player's moves must be studied to find the best one, but for each, only the best second player's move is needed to refute all but the first (and best) first player move – alpha-beta ensures no other second player moves need be considered. If b = 40 (as in chess), and the search depth is 12 ply, the ratio between optimal and pessimal sorting is a factor of nearly 40^6, or about 4 billion times, as the short calculation below illustrates.

Normally during alpha-beta, the subtrees are temporarily dominated by either a first player advantage (when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try to find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially with each move nearer the current position, it is worth spending considerable effort on sorting early moves. An improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as through iterative deepening.
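As a quick back-of-the-envelope check of the chess figure above (b = 40, d = 12), here is a short illustrative calculation in Python, taking the best case as roughly b^(d/2) leaf positions:

 b, d = 40, 12
 worst = b ** d         # pessimal ordering: 40^12, about 1.7e19 leaf positions
 best = b ** (d // 2)   # optimal ordering:  40^6,  about 4.1e9 leaf positions
 print(worst // best)   # ratio = 40^6 = 4,096,000,000, i.e. about 4 billion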
The algorithm maintains two values, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of, respectively. Initially, alpha is negative infinity and beta is positive infinity. As the recursion progresses, the "window" becomes smaller. When beta becomes less than or equal to alpha, it means that the current position cannot be the result of best play by both players and hence need not be explored further.
Pseudocode

 function alphabeta(node, depth, α, β)
     (* β represents the previous player's best choice - doesn't want it if α would worsen it *)
     if node is a terminal node or depth = 0
         return the heuristic value of node
     foreach child of node
         α := max(α, -alphabeta(child, depth-1, -β, -α))   (* use symmetry, -β becomes subsequently pruned α *)
         if β ≤ α
             break                                          (* beta cut-off *)
     return α

 (* Initial call *)
 alphabeta(origin, depth, -inf, +inf)
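For illustration, the same negamax-style routine can be written as a short Python sketch. The node interface assumed here (is_terminal(), children(), and evaluate() returning a score from the point of view of the player to move) is hypothetical and chosen only for the example:

 import math

 def alphabeta(node, depth, alpha, beta):
     # Negamax convention: the returned score is always from the
     # perspective of the player to move at this node.
     if depth == 0 or node.is_terminal():
         return node.evaluate()  # heuristic value for the player to move
     for child in node.children():
         # Negate the child's score (the opponent moves there) and
         # mirror the window: (alpha, beta) becomes (-beta, -alpha).
         alpha = max(alpha, -alphabeta(child, depth - 1, -beta, -alpha))
         if beta <= alpha:
             break  # beta cut-off: the opponent will avoid this line
     return alpha

 # Initial call, for a hypothetical root position `origin`:
 # best = alphabeta(origin, 6, -math.inf, math.inf)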
Heuristic improvements
Further improvement can be achieved without sacrificing accuracy by using ordering heuristics to search parts of the tree that are likely to force alpha-beta cutoffs early. For example, in chess, moves that capture pieces may be examined before moves that do not, or moves that have scored highly in earlier passes through the game-tree analysis may be evaluated before others. Another common, and very cheap, heuristic is the killer heuristic, where the last move that caused a beta-cutoff at the same level in the tree search is always examined first. This idea can be generalized into a set of refutation tables.

Alpha-beta search can be made even faster by considering only a narrow search window (generally determined by guesswork based on experience). This is known as "aspiration search". In the extreme case, the search is performed with alpha and beta equal, a technique known as "zero-window search", "null-window search", or "scout search". This is particularly useful for win/loss searches near the end of a game, where the extra depth gained from the narrow window and a simple win/loss evaluation function may lead to a conclusive result. If an aspiration search fails, it is straightforward to detect whether it failed "high" (the high edge of the window was too low) or "low" (the low edge of the window was too high). This gives information about what window values might be useful in a re-search of the position.
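As an illustration of aspiration search, the following minimal Python sketch reuses the alphabeta routine sketched above; the guessed score and the window margin of 50 (in whatever evaluation units the engine uses) are arbitrary assumptions, and a failed narrow search simply falls back to a full-window re-search:

 import math

 def aspiration_search(root, depth, guess, margin=50):
     # Search a narrow window around the guessed score first.
     alpha, beta = guess - margin, guess + margin
     score = alphabeta(root, depth, alpha, beta)
     if score <= alpha or score >= beta:
         # Failed low or high: the true value lies outside the window,
         # so re-search with a full window to obtain the exact score.
         score = alphabeta(root, depth, -math.inf, math.inf)
     return score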
Other algorithms
More advanced algorithms that are even faster while still being able to compute the exact minimax value are known, such as Negascout and MTD(f).

Since the minimax algorithm and its variants are inherently depth-first, a strategy such as iterative deepening is usually used in conjunction with alpha-beta so that a reasonably good move can be returned even if the algorithm is interrupted before it has finished execution. Another advantage of using iterative deepening is that searches at shallower depths give move-ordering hints that can help produce cutoffs in higher-depth searches much earlier than would otherwise be possible.
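A minimal sketch of such an iterative-deepening driver, again assuming the alphabeta routine above together with a hypothetical time_left() callback, might look like this:

 import math

 def iterative_deepening(root, max_depth, time_left):
     # Search to depth 1, 2, 3, ... so that a completed result is always
     # available if the search has to be cut short.
     best = None
     for depth in range(1, max_depth + 1):
         if not time_left():
             break
         best = alphabeta(root, depth, -math.inf, math.inf)
         # A real engine would also reuse this iteration's results
         # (e.g. via a transposition table) to order moves next time.
     return best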
Algorithms like SSS*, on the other hand, use the best-first strategy. This can potentially make them more time-efficient, but typically at a heavy cost in space-efficiency.

See also
* Pruning (algorithm)
* Branch and bound
* Minimax
* Combinatorial optimization
* Negamax
* Transposition table
* MTD(f)
* Negascout
* Killer heuristic

References

* Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach, 2nd ed. Prentice Hall.
External links
* http://www.emunix.emich.edu/~evett/AI/AlphaBeta_movie/sld001.htm
* http://sern.ucalgary.ca/courses/CPSC/533/W99/presentations/L1_5B_McCullough_Melnyk/
* http://sern.ucalgary.ca/courses/CPSC/533/W99/presentations/L2_5B_Lima_Neitz/search.html
* http://www.maths.nott.ac.uk/personal/anw/G13GAM/alphabet.html
* http://chess.verhelst.org/search.html
* http://www.frayn.net/beowulf/index.html
* http://hal.inria.fr/docs/00/12/15/16/PDF/RR-6062.pdf
* [http://wolfey.110mb.com/GameVisual/launch.php?agent=2 Minimax (with or without alpha-beta pruning) algorithm visualization - game tree solving (Java Applet)]