Stochastic gradient descent (SGD) is widely applied to machine learning tasks. However, gradient variance degrades the convergence of SGD, and this degradation is difficult to mitigate in tasks where each sample is drawn only once. In this paper, a kernel reproduced gradient descent algorithm is proposed in which the gradient of the risk function is estimated using Reproducing Kernel Hilbert Space (RKHS) theory. Meanwhile, to keep the storage requirement low, a kernel function linearization strategy is employed to derive a polynomial form of the kernel learning algorithm. It is shown that the error resulting from linearization decreases as the model parameter size increases, and that the proposed polynomial-like kernel reproduced gradient descent (PKRGD) algorithm converges faster than SGD. Experiments on models including the recent image matching model LightGlue verify that the reproduced gradients have lower variance and that existing optimizers using the reproduced gradients can further accelerate model training.
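The gradient-variance issue motivating this work can be illustrated with a minimal sketch (not the paper's PKRGD method; all names are illustrative): on a linear least-squares risk, single-sample SGD gradients are unbiased estimates of the full gradient but scatter widely around it, which is the effect the reproduced gradients aim to reduce.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)  # current iterate

def full_gradient(w):
    # Gradient of the empirical risk (1/2n) * ||Xw - y||^2
    return X.T @ (X @ w - y) / n

def sample_gradient(w, i):
    # Gradient contribution of a single sample i
    return X[i] * (X[i] @ w - y[i])

g_full = full_gradient(w)
g_samples = np.array([sample_gradient(w, i) for i in range(n)])

# Unbiasedness: the average per-sample gradient equals the full gradient.
print(np.allclose(g_samples.mean(axis=0), g_full))  # True

# Variance: per-sample gradients scatter widely around the full gradient;
# here the mean squared deviation exceeds the full gradient's own norm.
var = np.mean(np.sum((g_samples - g_full) ** 2, axis=1))
print(var > np.sum(g_full ** 2))  # True
```

This scatter is what slows plain SGD; reducing it, whether by mini-batching, variance-reduction methods, or the RKHS-based gradient estimation described above, tightens each update toward the true risk gradient.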