Abstract

The Lenstra-Lenstra-Lovász lattice basis reduction algorithm (called LLL or ${\rm L}^3$) is a fundamental tool in computational number theory and theoretical computer science, which can be viewed as an efficient algorithmic version of Hermite's inequality on Hermite's constant. Given an integer $d$-dimensional lattice basis with vectors of Euclidean norm less than $B$ in an $n$-dimensional space, the ${\rm L}^3$ algorithm outputs a reduced basis in $O(d^3 n \log B \cdot \mathcal{M}(d \log B))$ bit operations, where $\mathcal{M}(k)$ denotes the time required to multiply $k$-bit integers. This worst-case complexity is problematic for applications where $d$ and/or $\log B$ are often large. As a result, the original ${\rm L}^3$ algorithm is almost never used in practice, except in tiny dimensions. Instead, one applies floating-point variants in which the long-integer arithmetic required by Gram-Schmidt orthogonalization is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point ${\rm L}^3$ algorithm is not even guaranteed to terminate, and the output basis may not be ${\rm L}^3$-reduced at all. In this article, we introduce the ${\rm L}^2$ algorithm, a new and natural floating-point variant of the ${\rm L}^3$ algorithm which provably outputs ${\rm L}^3$-reduced bases in polynomial time $O(d^2 n (d + \log B) \log B \cdot \mathcal{M}(d))$. This is the first ${\rm L}^3$ algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to $\log B$, like Euclid's gcd algorithm and Lagrange's two-dimensional algorithm.
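To make the exact-arithmetic cost mentioned above concrete, here is a minimal textbook sketch of the original ${\rm L}^3$ algorithm (not the ${\rm L}^2$ variant introduced in this article), using Python's `fractions.Fraction` for the exact rational Gram-Schmidt coefficients; the function and variable names are this sketch's own, and the full re-orthogonalization after every update is deliberately naive to highlight where the long-integer arithmetic dominates.

```python
from fractions import Fraction

def dot(u, v):
    """Exact rational inner product."""
    return sum(Fraction(x) * Fraction(y) for x, y in zip(u, v))

def gram_schmidt(b):
    """Exact Gram-Schmidt: returns orthogonal vectors b* and coefficients mu.

    The rationals mu[i][j] can grow very large, which is the source of the
    long-integer cost that floating-point variants try to avoid."""
    n = len(b)
    bstar = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in b[i]]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vi - mu[i][j] * bj for vi, bj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    """Naive exact LLL reduction with reduction factor delta (3/4 here)."""
    b = [list(map(Fraction, v)) for v in basis]
    n = len(b)
    bstar, mu = gram_schmidt(b)
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt(b)  # naive: recompute everything
        # Lovász condition: accept b_k or swap it with b_{k-1}.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            bstar, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return b
```

Because every quantity is an exact rational, termination and correctness follow the classical analysis, but each Gram-Schmidt recomputation multiplies integers whose bit-length can grow with $d \log B$; replacing these rationals with fixed-precision floating-point numbers is precisely the step that the ${\rm L}^2$ algorithm makes provably safe.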
