- Baby-step giant-step
In group theory, a branch of mathematics, the baby-step giant-step algorithm is a series of well-defined steps to compute the discrete logarithm. The discrete log problem is of fundamental importance to the area of public key cryptography. Many of the most commonly used cryptography systems are based on the assumption that the discrete log is extremely difficult to compute; the more difficult it is, the more security it provides to a data transfer. One way to increase the difficulty of the discrete log problem is to base the cryptosystem on a larger group.
Theory
The algorithm is based on a space-time tradeoff. It is a fairly simple modification of trial multiplication, the naive method of finding discrete logarithms.
Given a cyclic group "G" of order "n", a generator α of the group and a group element β, the problem is to find an integer "x" such that
: α^"x" = β.
The baby-step giant-step algorithm is based on rewriting "x" as "x" = "im" + "j", with "m" = Ceiling(√"n"), 0 ≤ "i" < "m" and 0 ≤ "j" < "m". Therefore, we have:
: α^"x" = β
: α^("im")α^"j" = β
: β(α^(−"m"))^"i" = α^"j".
The algorithm precomputes α^"j" for several values of "j". Then it fixes an "m" and tries values of "i" in the left-hand side of the congruence above, in the manner of trial multiplication. It tests to see if the congruence is satisfied for any value of "j", using the precomputed values of α^"j".
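To make the rewriting concrete, the following Python snippet checks the congruence on a small toy instance; the specific group, generator, and target element are illustrative choices made here, not values from the article.

```python
# A toy numerical check of the rewriting above (not from the original article):
# the group is (Z/29Z)^* with generator alpha = 2 and order n = 28, and
# beta = 21 = 2^17 mod 29, so the unknown exponent is x = 17.
p, n = 29, 28
alpha, beta = 2, 21

m = 6                                  # Ceiling(sqrt(28))
i, j = 2, 5                            # 17 = i*m + j with 0 <= j < m

lhs = beta * pow(pow(alpha, n - m, p), i, p) % p   # beta * (alpha^(-m))^i
rhs = pow(alpha, j, p)                             # alpha^j
assert lhs == rhs == 3                 # the congruence holds
print(i * m + j)                       # 17, the discrete log of beta
```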
The algorithm
Input: A cyclic group "G" of order "n", having a generator α and an element β.
Output: A value "x" satisfying .
# "m" ← Ceiling(√"n")
# For all "j" where 0 ≤ "j" < "m":
## Compute α"j" and store the pair ("j", α"j") in a table. (See section "In practice")
# Compute α−"m".
# γ ← β.
# For "i" = 0 to ("m" − 1):
## Check to see if γ is the second component (α^"j") of any pair in the table.
## If so, return "im" + "j".
## If not, γ ← γ • α^(−"m").
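The numbered steps above can be sketched in Python for the multiplicative group of integers modulo a prime "p"; the function name and its parameters are assumptions made for this illustration, not part of the original description.

```python
from math import isqrt

def baby_step_giant_step(alpha, beta, n, p):
    """Return x with alpha^x = beta (mod p), where alpha generates a cyclic
    group of order n inside (Z/pZ)^*, or None if no such x exists."""
    m = isqrt(n - 1) + 1                         # m = Ceiling(sqrt(n))

    # Baby steps: store the pairs (alpha^j, j) for 0 <= j < m.
    table = {pow(alpha, j, p): j for j in range(m)}

    # alpha^(-m) mod p; since alpha^n = 1, this equals alpha^(n - m).
    factor = pow(alpha, n - m, p)

    # Giant steps: gamma runs through beta, beta*alpha^(-m), beta*alpha^(-2m), ...
    gamma = beta % p
    for i in range(m):
        if gamma in table:                       # is gamma some stored alpha^j?
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None
```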
In practice
The best way to speed up the baby-step giant-step algorithm is to use an efficient table lookup scheme. The best in this case is a hash table. The hashing is done on the second component, and to perform the check in step 1 of the main loop, γ is hashed and the resulting memory address checked. Since hash tables can retrieve and add elements in O(1) time (constant time), this does not slow down the overall baby-step giant-step algorithm.
Both the running time and the space complexity of the algorithm are O(√"n").
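Continuing the sketch above, the Python dict built in its baby-step phase plays the role of the hash table (lookups and insertions in expected constant time); applied to the toy instance from the Theory section it recovers the exponent:

```python
x = baby_step_giant_step(alpha=2, beta=21, n=28, p=29)
print(x)                    # 17
assert pow(2, x, 29) == 21  # 2^17 = 21 (mod 29)
```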
Notes
* The baby-step giant-step algorithm is a generic algorithm. It works for every finite cyclic group.
* It is not necessary to know the order of the group "G" in advance. The algorithm still works if "n" is merely an upper bound on the group order.
* Usually the baby-step giant-step algorithm is used for groups whose order is prime. If the order of the group is composite then the Pohlig–Hellman algorithm is more efficient.
* The algorithm requires O("m") memory. It is possible to use less memory by choosing a smaller "m" in the first step of the algorithm. Doing so increases the running time, which then is O("n"/"m"). Alternatively one can usePollard's rho algorithm for logarithms , which has about the same running time as the baby-step giant-step algorithm, but only a small memory requirement.
* The algorithm was originally developed by Daniel Shanks.
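To illustrate the memory/running-time tradeoff noted above, here is a hedged variant of the earlier sketch in which "m" is chosen by the caller (an assumption for illustration): the table then holds only "m" entries, while the giant-step loop may need up to Ceiling("n"/"m") iterations.

```python
def bsgs_with_memory_bound(alpha, beta, n, p, m):
    """Like baby_step_giant_step above, but with a caller-chosen m:
    O(m) memory and up to ceil(n/m) giant steps, i.e. O(n/m) running time."""
    table = {pow(alpha, j, p): j for j in range(m)}   # m baby steps
    factor = pow(alpha, n - m, p)                     # alpha^(-m) mod p
    gamma = beta % p
    for i in range((n + m - 1) // m):                 # ceil(n/m) giant steps
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None

# Same toy instance as before, but with a table of only 4 entries instead of 6:
print(bsgs_with_memory_bound(2, 21, 28, 29, m=4))     # 17
```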
References
*H. Cohen, A course in computational algebraic number theory, Springer, 1996.
*D. Shanks. Class number, a theory of factorization and genera. In Proc. Symp. Pure Math. 20, pages 415–440. AMS, Providence, R.I., 1971.
*A. Stein and E. Teske, Optimized baby step-giant step methods, Journal of the Ramanujan Mathematical Society 20 (2005), no. 1, 1–32.
*A. V. Sutherland, Order computations in generic groups, PhD thesis, M.I.T., 2007, http://groups.csail.mit.edu/cis/theses/sutherland-phd.pdf.
*D. C. Terr, A modification of Shanks’ baby-step giant-step algorithm, Mathematics of Computation 69 (2000), 767–773.