Precision matrices efficiently characterize the conditional correlation structure among variables and have received much attention in recent years. When large datasets are stored in different locations and data sharing is not allowed, high-dimensional precision matrix estimation can be numerically challenging or even infeasible. In this work, we study distributed sparse precision matrix estimation via an alternating block-based gradient descent method. We obtain a global model by aggregating each machine's information through a communication-efficient surrogate penalized likelihood. The procedure chooses the block coordinates using the local gradients to guide the global gradient updates, which efficiently accelerates precision matrix estimation and lessens the communication load across machines. The proposed method correctly selects the non-zero elements of a sparse precision matrix. Under mild conditions, we show that the proposed estimator achieves a near-oracle convergence rate, as if the estimation had been conducted on a consolidated dataset on a single machine. The promising performance of the method is supported by both simulated and real data examples.
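To make the setting concrete, the following is a minimal Python sketch of this style of procedure, assuming a Gaussian graphical model with penalized negative log-likelihood tr(S\Theta) - \log\det\Theta + \lambda\|\Theta\|_{1,\mathrm{off}}. The one-shot covariance communication, the block-selection rule, the step size, and all function names below are illustrative assumptions, not the paper's exact algorithm.

    # Sketch: distributed block-wise proximal gradient descent for a sparse
    # precision matrix. Assumptions (not from the paper): each machine shares
    # its local sample covariance once, and local gradient magnitudes pick the
    # coordinate block that the global gradient step updates.
    import numpy as np

    def soft_threshold(x, t):
        """Elementwise soft-thresholding, the proximal map of the l1 penalty."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def distributed_block_glasso(local_data, lam=0.1, step=0.05,
                                 n_iter=200, block_frac=0.2):
        """Proximal gradient descent on the penalized Gaussian log-likelihood,
        updating only entries flagged as active by the local gradients."""
        p = local_data[0].shape[1]
        # Each machine communicates only its local sample covariance.
        local_S = [X.T @ X / X.shape[0] for X in local_data]
        theta = np.eye(p)
        k = max(1, int(block_frac * p * p))  # size of the active block
        for _ in range(n_iter):
            inv_theta = np.linalg.inv(theta)
            # Local gradients of the negative log-likelihood: S_m - Theta^{-1}.
            local_grads = [S - inv_theta for S in local_S]
            global_grad = sum(local_grads) / len(local_grads)
            # Local gradient magnitudes guide which coordinates to update.
            score = np.max(np.abs(np.stack(local_grads)), axis=0)
            mask = score >= np.partition(score.ravel(), -k)[-k]
            # Global gradient step, then soft-threshold off-diagonal entries.
            update = theta - step * global_grad
            off = ~np.eye(p, dtype=bool)
            update[off] = soft_threshold(update[off], step * lam)
            theta = np.where(mask, update, theta)
            theta = (theta + theta.T) / 2  # keep the iterate symmetric
        return theta

    # Toy usage: data split across 4 machines, tridiagonal true precision matrix.
    rng = np.random.default_rng(0)
    p = 20
    prec = (np.eye(p) + 0.3 * np.diag(np.ones(p - 1), 1)
            + 0.3 * np.diag(np.ones(p - 1), -1))
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=2000)
    theta_hat = distributed_block_glasso(np.array_split(X, 4))

Restricting each update to the locally flagged block is what keeps per-round communication small: machines need only exchange gradient information for the active coordinates rather than the full p-by-p matrix.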