This paper proposes a differentially private decentralized zeroth-order gradient-tracking (DP-DZOGT) algorithm for solving decentralized optimization problems in which the gradient information of the objective function is unavailable. To address this challenge, a one-point zeroth-order gradient estimator (OPZOGE) is constructed, which estimates the gradient from function values alone and guides the update of the decision variables. Additionally, to prevent privacy leakage by the agents, random noise is injected into both the states and the gradient estimates of the agents, which effectively strengthens the level of privacy protection. Linear convergence of the proposed DP-DZOGT algorithm under a fixed step size is guaranteed. Moreover, the algorithm is applied to smart grids (SG) and decentralized federated learning (DFL). Finally, its effectiveness is validated through three numerical simulations.
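To illustrate the two mechanisms the abstract names, the sketch below shows a generic one-point zeroth-order gradient estimate, `(d/delta) * f(x + delta*u) * u` with `u` uniform on the unit sphere, followed by a zero-mean Gaussian perturbation standing in for the privacy noise. This is a minimal illustration, not the paper's algorithm: the function names, the smoothing radius `delta`, and the noise scale `noise_std` are assumptions, and the paper's actual estimator and noise mechanism may differ.

```python
import numpy as np

def one_point_grad_estimate(f, x, delta=0.05, rng=None):
    """One-point zeroth-order estimate: (d/delta) * f(x + delta*u) * u,
    where u is uniform on the unit sphere, so only a single function
    evaluation (and no gradient oracle) is needed."""
    rng = rng or np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # random unit direction
    return (d / delta) * f(x + delta * u) * u

def private_one_point_estimate(f, x, delta=0.05, noise_std=0.1, rng=None):
    """Illustrative privacy perturbation (an assumption, not the paper's
    mechanism): add zero-mean Gaussian noise so the transmitted quantity
    does not reveal the agent's exact function value."""
    rng = rng or np.random.default_rng()
    g = one_point_grad_estimate(f, x, delta, rng)
    return g + noise_std * rng.standard_normal(x.size)

# Sanity check on f(x) = ||x||^2, whose true gradient at x is 2x.
rng = np.random.default_rng(0)
f = lambda x: float(x @ x)
x = np.array([0.5, -0.5])
# The one-point estimator is high-variance, so average many draws:
est = np.mean(
    [one_point_grad_estimate(f, x, rng=rng) for _ in range(50000)], axis=0
)
```

Averaged over many draws, the estimate approaches the true gradient `2x = [1, -1]`; in the algorithm itself a single noisy draw per iteration would be used, with gradient tracking smoothing out the variance across iterations.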