For centralized optimization, it is well known that adding a momentum term (the so-called heavy-ball method) yields a faster convergence rate than the plain gradient method. For the distributed counterpart, however, there are few results on how added momentum terms affect the convergence rate. This article studies this issue in the distributed setup, where N agents minimize the sum of their individual cost functions, assumed twice continuously differentiable, using local communication over a network. We first study the algorithm with one momentum term and develop a distributed heavy-ball (D-HB) method by adding a single momentum term to the distributed gradient algorithm. Borrowing tools from control theory, we provide a simple convergence proof and an explicit expression for the optimal convergence rate. Furthermore, we consider the case of two momentum terms and propose a distributed double-heavy-ball (D-DHB) method. We show that adding one momentum term allows faster convergence, whereas adding two momentum terms offers no further advantage. Finally, simulation examples are given to illustrate our findings.
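To make the setup concrete, the following is a minimal sketch, not the paper's exact method, of a distributed heavy-ball iteration of the form x_i^{k+1} = sum_j W[i,j] x_j^k - alpha * grad f_i(x_i^k) + beta * (x_i^k - x_i^{k-1}); the step size alpha, momentum parameter beta, mixing matrix W, and cost functions below are all illustrative assumptions.

```python
import numpy as np

def d_hb(grads, W, x0, alpha=0.05, beta=0.4, iters=200):
    """Sketch of a distributed heavy-ball iteration (illustrative parameters).

    grads: list of per-agent gradient functions
    W:     doubly stochastic mixing matrix of the network
    """
    x = np.array(x0, dtype=float)   # current iterates, one entry per agent
    x_prev = x.copy()               # previous iterates, for the momentum term
    for _ in range(iters):
        local_grads = np.array([g(xi) for g, xi in zip(grads, x)])
        # consensus step + gradient step + heavy-ball momentum
        x_new = W @ x - alpha * local_grads + beta * (x - x_prev)
        x_prev, x = x, x_new
    return x

# Example: N = 3 agents minimize sum_i (x - c_i)^2 over a fully connected
# network; the global minimizer is mean(c) = 2.0.
c = np.array([1.0, 2.0, 3.0])
grads = [lambda x, ci=ci: 2.0 * (x - ci) for ci in c]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = d_hb(grads, W, x0=np.zeros(3))
# with a constant step size, all agents settle in a small neighborhood
# of the global minimizer 2.0
```

With a constant step size the iterates converge to a neighborhood of the optimum whose radius shrinks with alpha; the momentum term beta * (x - x_prev) is what distinguishes this from the plain distributed gradient iteration.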