Dixon's factorization method
In number theory, Dixon's factorization method (also Dixon's random squares method[1] or Dixon's algorithm) is a general-purpose integer factorization algorithm; it is the prototypical factor base method, and the only factor base method for which a run-time bound not reliant on conjectures about the smoothness properties of values of a polynomial is known.
The algorithm was designed by John D. Dixon, a mathematician at Carleton University, and was published in 1981.[2]
Basic idea
Dixon's method is based on finding a congruence of squares modulo the integer N which we intend to factor. Fermat's factorization method finds such a congruence by selecting random or pseudo-random x values and hoping that the integer x² mod N is a perfect square (in the integers):

x² ≡ y² (mod N),  with x ≢ ±y (mod N).
For example, if N = 84923, we notice (by starting at 292, the first number greater than √N, and counting up) that 505² mod 84923 is 256, the square of 16. So (505 − 16)(505 + 16) ≡ 0 (mod 84923). Computing the greatest common divisor of 505 − 16 and N using Euclid's algorithm gives us 163, which is a factor of N.
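The arithmetic of this small example is easy to check directly; the following Python snippet (illustrative only, standard library functions, variable names ours) verifies the congruence and extracts the factor with Euclid's algorithm:

```python
# A quick check of the worked example above.
from math import gcd, isqrt

N = 84923
x = 505
r = (x * x) % N               # 256
y = isqrt(r)                  # 16, and 16 * 16 == 256, so r is a perfect square
print(gcd(x - y, N))          # 163, a nontrivial factor of N
print(N // gcd(x - y, N))     # 521, the cofactor
```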
In practice, selecting random x values will take an impractically long time to find a congruence of squares, since there are only √N squares less than N.
Dixon's method replaces the condition "is the square of an integer" with the much weaker one "has only small prime factors"; for example, there are 292 squares less than 84923; 662 numbers less than 84923 whose prime factors are only 2, 3, 5, or 7; and 4767 whose prime factors are all less than 30. (Such numbers are called B-smooth with respect to some bound B.)
If we have many numbers a_1, ..., a_n whose squares can be factorized as a_i² ≡ ∏_j p_j^(e_ij) (mod N) for a fixed set of small primes p_j, linear algebra modulo 2 on the matrix of exponents e_ij will give us a subset of the a_i whose squares combine to a product of small primes to an even power; that is, a subset of the a_i whose squares multiply to the square of a (hopefully different) number mod N.
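To make the smoothness test and the exponent vectors concrete, the helper below (an illustrative Python sketch, not from the article; the function name is ours) trial-divides a value over a small factor base and returns its exponent vector, which is one row of the matrix e_ij that the linear algebra step then reduces modulo 2:

```python
def smooth_exponents(value, factor_base):
    """Return the exponent vector of value over factor_base, or None if
    value is not smooth over it (a cofactor greater than 1 remains)."""
    exponents = []
    for p in factor_base:
        e = 0
        while value % p == 0:
            value //= p
            e += 1
        exponents.append(e)
    return exponents if value == 1 else None

N = 84923
P = [2, 3, 5, 7]
print(smooth_exponents(513 * 513 % N, P))  # [4, 1, 2, 1]: 8400 = 2^4 * 3 * 5^2 * 7
print(smooth_exponents(292 * 292 % N, P))  # None: 341 = 11 * 31 is not 7-smooth
```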
Method
Suppose we are trying to factor the composite number N. We choose a bound B, and identify the factor base (which we will call P), the set of all primes less than or equal to B. Next, we search for positive integers z such that z² mod N is B-smooth. We can therefore write, for suitable exponents a_k,

z² mod N = ∏_{p_k ∈ P} p_k^(a_k).
When we have generated enough of these relations (it's generally sufficient that the number of relations be a few more than the size of P), we can use the methods of linear algebra (for example, Gaussian elimination) to multiply together these various relations in such a way that the exponents of the primes on the right-hand side are all even:

(z_1 · z_2 ⋯ z_m)² mod N = ∏_{p_k ∈ P} p_k^(c_k), where every exponent c_k is even.
This gives us a congruence of squares of the form a2 ≡ b2 (mod N), which can be turned into a factorization of N, N = gcd(a + b, N) × (N/gcd(a + b, N)). This factorization might turn out to be trivial (i.e. N = N × 1), which can only happen if a ≡ ±b (mod N), in which case we have to try again with a different combination of relations; but with luck we will get a nontrivial pair of factors of N, and the algorithm will terminate.
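Putting these steps together, here is a compact Python sketch of the whole procedure for small N. It follows the description above (random z, trial division over the factor base, Gaussian elimination over GF(2), then the gcd step); the function names, the choice of collecting five extra relations, the fixed seed, and the retry logic are illustrative choices of this sketch, not prescribed by Dixon's paper.

```python
import random
from math import gcd, isqrt

def primes_up_to(B):
    """The factor base: all primes <= B (trial division is fine for tiny B)."""
    ps = []
    for n in range(2, B + 1):
        if all(n % p for p in ps):
            ps.append(n)
    return ps

def smooth_exponents(value, ps):
    """Exponent vector of value over ps, or None if a cofactor > 1 remains."""
    if value <= 0:
        return None
    exps = []
    for p in ps:
        e = 0
        while value % p == 0:
            value //= p
            e += 1
        exps.append(e)
    return exps if value == 1 else None

def dixon(N, B=7, seed=0):
    rng = random.Random(seed)       # deterministic, for reproducibility
    ps = primes_up_to(B)
    relations = []                  # pairs (z, exponent vector of z^2 mod N)
    while True:
        # 1. Collect a few more B-smooth relations than the size of the factor base.
        while len(relations) < len(ps) + 5:
            z = rng.randrange(isqrt(N) + 1, N)
            exps = smooth_exponents(z * z % N, ps)
            if exps is not None:
                relations.append((z, exps))
        # 2. Gaussian elimination over GF(2); each row carries a bitmask that
        #    remembers which original relations have been combined into it.
        rows = [[[e % 2 for e in exps], 1 << i] for i, (_, exps) in enumerate(relations)]
        for col in range(len(ps)):
            pivot = next((r for r in rows if r[0][col] == 1), None)
            if pivot is None:
                continue
            rows.remove(pivot)
            for r in rows:
                if r[0][col] == 1:
                    r[0] = [(a + b) % 2 for a, b in zip(r[0], pivot[0])]
                    r[1] ^= pivot[1]
        # 3. Every surviving row is zero mod 2, so its bitmask names a subset of
        #    relations whose combined exponents are all even, i.e. a congruence
        #    of squares a^2 ≡ b^2 (mod N).
        for _, mask in rows:
            subset = [relations[i] for i in range(len(relations)) if mask >> i & 1]
            a, total = 1, [0] * len(ps)
            for z, exps in subset:
                a = a * z % N
                total = [t + e for t, e in zip(total, exps)]
            b = 1
            for p, e in zip(ps, total):
                b = b * pow(p, e // 2, N) % N
            d = gcd(a - b, N)
            if 1 < d < N:
                return d, N // d
        relations = []   # only trivial congruences this round; start over

print(dixon(84923))      # a nontrivial factor pair: 163 and 521, in some order
```

For N of cryptographic size this direct approach is hopeless; the point of the sketch is only to make the relation collection and the GF(2) step concrete.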
Example
We will try to factor N = 84923 using bound B = 7. Our factor base is then P = {2, 3, 5, 7}. We then search randomly for integers between √N and N whose squares are B-smooth. Suppose that two of the numbers we find are 513 and 537:

513² mod 84923 = 8400 = 2⁴ · 3 · 5² · 7
537² mod 84923 = 33600 = 2⁶ · 3 · 5² · 7

So

(513 · 537)² mod 84923 = (2⁴ · 3 · 5² · 7) · (2⁶ · 3 · 5² · 7) mod 84923 = 2¹⁰ · 3² · 5⁴ · 7² mod 84923

Then

(513 · 537)² mod 84923 = (2⁵ · 3 · 5² · 7)² mod 84923

That is, since 513 · 537 mod 84923 = 20712 and 2⁵ · 3 · 5² · 7 = 16800,

20712² ≡ 16800² (mod 84923).
The resulting factorization is 84923 = gcd(20712 − 16800, 84923) × gcd(20712 + 16800, 84923) = 163 × 521.
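Each quantity quoted in this example can be reproduced with a few lines of Python (illustrative, standard library only):

```python
from math import gcd

N = 84923
print(513**2 % N, 537**2 % N)                     # 8400 33600
print(2**4 * 3 * 5**2 * 7, 2**6 * 3 * 5**2 * 7)   # 8400 33600, the smooth factorizations
a = 513 * 537 % N                                  # 20712
b = 2**5 * 3 * 5**2 * 7                            # 16800
print(gcd(a - b, N), gcd(a + b, N))                # 163 521
```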
Optimizations
The quadratic sieve is an optimization of Dixon's method. It selects values of x close to the square root of N so that x² modulo N is small, thereby greatly increasing the chance of obtaining a smooth number.
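The effect is easy to see numerically: for x just above √N, the residue x² − N is only a few thousand rather than a number comparable to N itself, so it is far more likely to be smooth. A toy illustration (not a full quadratic sieve), using the article's N:

```python
from math import isqrt

N = 84923
for x in range(isqrt(N) + 1, isqrt(N) + 6):
    print(x, x * x - N)   # residues 341, 926, 1513, 2102, 2693 -- all far smaller than N
```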
Other ways to optimize Dixon's method include using a better algorithm to solve the matrix equation, taking advantage of the sparsity of the matrix: a number z cannot have more than log₂ z prime factors, so each row of the matrix is almost all zeros. In practice, the block Lanczos algorithm is often used. Also, the size of the factor base must be chosen carefully: if it is too small, it will be difficult to find numbers that factorize completely over it, and if it is too large, more relations will have to be collected.
A more sophisticated analysis, using the approximation that a number has all its prime factors less than N^(1/a) with probability about a^(−a) (an approximation to the Dickman–de Bruijn function), indicates that choosing too small a factor base is much worse than too large, and that the ideal factor base size is some power of exp(√(log N · log log N)).
The optimal complexity of Dixon's method is

O(exp(2√2 · √(log n · log log n)))

in big-O notation, or

L_n[1/2, 2√2]

in L-notation.
References
- ^ Kleinjung, Thorsten; et al. (2010). "Factorization of a 768-bit RSA modulus". Advances in Cryptology – CRYPTO 2010. Lecture Notes in Computer Science. 6223. pp. 333–350. doi:10.1007/978-3-642-14623-7_18.
- ^ Dixon, J. D. (1981). "Asymptotically fast factorization of integers". Math. Comp. 36 (153): 255–260. doi:10.1090/S0025-5718-1981-0595059-1. JSTOR 2007743.