Lattice reduction is a geometric generalization of the problem of computing greatest common divisors. Most of the interesting algorithmic problems related to lattice reduction are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm: it is a straightforward generalization of an old two-dimensional algorithm of Lagrange, usually known as Gauss' algorithm, which is itself very similar to Euclid's gcd algorithm. Our results are twofold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may be arbitrarily bad, as it may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether the algorithm remained polynomial in dimension four. We propose two different analyses: a global approach based on the geometry of the current basis when the length decrease stalls, and a local approach showing directly that a significant length decrease must occur every O(1) consecutive steps. Our analyses simplify Semaev's analysis in dimensions two and three, and unify the cases of dimensions two to four. Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm.
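To make the two-dimensional starting point concrete, the following is a minimal sketch of Lagrange's (Gauss') algorithm referred to above, which the greedy algorithm studied here generalizes to higher dimensions; the function name lagrange_reduce and the use of exact rational arithmetic are our own illustrative choices, and the d-dimensional greedy algorithm itself is not reproduced here.

```python
from fractions import Fraction


def dot(u, v):
    """Inner product of two integer vectors."""
    return sum(x * y for x, y in zip(u, v))


def lagrange_reduce(b1, b2):
    """Two-dimensional reduction in the spirit of Lagrange's (Gauss')
    algorithm: repeatedly subtract the nearest integer multiple of the
    shorter vector from the longer one, much like a step of Euclid's gcd.

    Assumes b1 and b2 are linearly independent integer vectors.
    """
    if dot(b1, b1) > dot(b2, b2):
        b1, b2 = b2, b1
    while True:
        # Nearest integer to <b1, b2> / <b1, b1>, computed exactly.
        q = round(Fraction(dot(b1, b2), dot(b1, b1)))
        b2 = [x - q * y for x, y in zip(b2, b1)]
        if dot(b2, b2) >= dot(b1, b1):
            # Reduced basis: b1 reaches the first minimum, b2 the second.
            return b1, b2
        b1, b2 = b2, b1


if __name__ == "__main__":
    print(lagrange_reduce([1, 0], [7, 1]))  # -> ([1, 0], [0, 1])
```

As in Euclid's gcd algorithm, each iteration performs a division-like step (subtracting a rounded multiple) followed by a swap, and the lengths of the basis vectors strictly decrease until the basis is reduced.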