Abstract

This paper describes the first algorithm to compute the greatest common divisor (GCD) of two $n$-bit integers using a modular representation for the intermediate values $U$ and $V$ as well as for the result. It is based on a reduction step, similar to one used in the accelerated algorithm [T. Jebelean, A generalization of the binary GCD algorithm, in: ISSAC '93: International Symposium on Symbolic and Algebraic Computation, Kiev, Ukraine, 1993, pp. 111–116; K. Weber, The accelerated integer GCD algorithm, ACM Trans. Math. Softw. 21 (1995) 111–122], applied when $U$ and $V$ are close to the same size: it replaces $U$ by $(U - bV)/p$, where $p$ is one of the prime moduli and $b$ is the unique integer in the interval $(-p/2, p/2)$ such that $b \equiv UV^{-1} \pmod{p}$. When the algorithm is executed on a bit common CRCW PRAM with $O(n \log n \log\log\log n)$ processors, it takes $O(n)$ time in the worst case. A heuristic model of the average case yields $O(n/\log n)$ time on the same number of processors.
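To make the reduction step concrete, here is a minimal Python sketch, not the authors' implementation; the function name `reduction_step`, the invertibility assertion, and the example values are illustrative assumptions. It shows how $b$ is lifted to the balanced interval $(-p/2, p/2)$ and why the division by $p$ is exact.

```python
def reduction_step(U: int, V: int, p: int) -> int:
    """One hypothetical reduction step: return (U - b*V) // p, where
    b is the unique integer in (-p/2, p/2) with b = U * V^{-1} (mod p)."""
    # Sketch assumption: the step is only applied when V is invertible mod p;
    # the full algorithm would have to handle or avoid the case p | V.
    assert V % p != 0, "V must be invertible modulo p"
    b = (U * pow(V, -1, p)) % p   # least non-negative residue of U/V mod p
    if b > p // 2:                # lift into the balanced interval (-p/2, p/2)
        b -= p
    # b*V = U (mod p) by construction, so U - b*V is divisible by p
    # and // performs exact integer division.
    return (U - b * V) // p

if __name__ == "__main__":
    # Illustrative values only.
    U, V, p = 987654321, 123456789, 13
    print(reduction_step(U, V, p))  # 37986705, roughly |U|/p in magnitude
```

Since $b \equiv UV^{-1} \pmod{p}$ implies $U - bV \equiv 0 \pmod{p}$, the division is exact, and $|U - bV|/p \le |U|/p + |V|/2$, so one step shrinks the operands substantially when $U$ and $V$ are close in size. One can also check that $\gcd((U - bV)/p,\, V) = \gcd(U, V)$ whenever $p \nmid V$, since any common divisor of the pair before the step divides the pair after it, and conversely.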
