This paper studies a distributed optimization problem over a fixed network. We develop and analyze an accelerated distributed gradient descent method, named Acc-DGDlm, which combines the gradient tracking technique with local memory. Specifically, we add two memory slots per agent to store two past estimates: an estimate of the optimal solution and an estimate of the average gradient. For strongly convex and smooth functions, Acc-DGDlm achieves a linear convergence rate of O(C^k) for some constant 0 < C < 1, provided that the fixed stepsize is sufficiently small and the coefficient θ of the past variables satisfies 0 ≤ θ < 1. In contrast to related works, where both the stepsize and the momentum coefficient must lie in intervals determined by global parameters, we eliminate the dependence of θ on global parameters, which makes θ easy to choose in practice. We also provide a theoretical analysis showing that including local memory can decrease the convergence factor C and thus speed up convergence. Moreover, numerical experiments on distributed estimation problems show that Acc-DGDlm converges faster than state-of-the-art methods, especially on sparse networks.
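The snippet below is a minimal NumPy sketch of the kind of scheme described above: a gradient-tracking iteration in which each agent keeps one extra copy of its past solution estimate and past tracked gradient and mixes them back in with a coefficient θ. The ring network, the Metropolis mixing matrix W, the least-squares local objectives, the stepsize α, and the exact way the memory terms enter the recursion are all assumptions made for illustration; this is not the paper's Acc-DGDlm update rule.

```python
import numpy as np

# Illustrative sketch (not the paper's exact Acc-DGDlm recursion): gradient tracking
# with a heavy-ball-style local-memory term on a ring of n agents, each holding a
# local least-squares objective f_i(x) = 0.5 * ||A_i x - b_i||^2.
rng = np.random.default_rng(0)
n, d = 8, 5                                   # number of agents, decision-variable dimension
A = rng.standard_normal((n, 10, d))
b = rng.standard_normal((n, 10))

def grad(i, x):
    """Local gradient of f_i at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring graph (Metropolis-style weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

alpha, theta = 1e-3, 0.5                      # fixed stepsize (small) and memory coefficient in [0, 1)

x = np.zeros((n, d))                          # current local estimates of the optimum
y = np.array([grad(i, x[i]) for i in range(n)])   # tracked estimates of the average gradient
x_prev, y_prev = x.copy(), y.copy()           # the two memory slots per agent

for k in range(5000):
    g_old = np.array([grad(i, x[i]) for i in range(n)])
    x_new = W @ x - alpha * y + theta * (x - x_prev)       # consensus + descent + memory term
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    y_new = W @ y + g_new - g_old + theta * (y - y_prev)   # gradient tracking + memory term
    x_prev, y_prev, x, y = x, y, x_new, y_new

# Compare against the centralized least-squares solution.
x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(n)),
                         sum(A[i].T @ b[i] for i in range(n)))
print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to optimum:", np.linalg.norm(x - x_star))
```

With θ = 0 the loop reduces to a standard gradient-tracking iteration, so the sketch also illustrates the role of the memory coefficient: only the two stored past iterates per agent distinguish the accelerated variant from its memoryless counterpart.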