Developing multiple UAVs to cooperatively pursue a fast-moving target has recently become a research hotspot. Although deep reinforcement learning (DRL) has achieved considerable success in the UAV pursuit game, problems remain, including the high-dimensional parameter space, susceptibility to local optima, long training times, and low task success rates. To address these issues, we propose an improved twin delayed deep deterministic policy gradient algorithm that combines a genetic algorithm with the maximum mean discrepancy method (GM-TD3) for multi-UAV cooperative pursuit of high-speed targets. First, this paper combines GA-based evolutionary strategies with TD3 to generate the actor networks. Then, to avoid falling into local optima during training, the maximum mean discrepancy (MMD) method is used to increase the diversity of the policy population as the population parameters are updated. Finally, the mutation operator is improved by assigning sensitivity weights to the genetic memory buffer of each UAV, which enhances the stability of the algorithm. In addition, this paper designs a hybrid reward function to accelerate training convergence. Simulation experiments verify that the improved algorithm trains much more efficiently and converges faster, that the task success rate reaches 95%, and that the UAVs cooperate more effectively to complete the pursuit game task.
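Since the abstract only names the MMD criterion without specifying how it is computed, the following minimal PyTorch sketch shows one common way to estimate the squared MMD between the action outputs of two actor networks on a shared batch of states; the Gaussian (RBF) kernel, the bandwidth `sigma`, the network sizes, and all function names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel on pairwise squared Euclidean distances between rows of x and y (assumed kernel choice).
    sq_dists = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd_squared(actions_a, actions_b, sigma=1.0):
    # Simple (biased) estimate of squared MMD between two sets of actions
    # produced by two actor networks on the same batch of states.
    k_aa = gaussian_kernel(actions_a, actions_a, sigma).mean()
    k_bb = gaussian_kernel(actions_b, actions_b, sigma).mean()
    k_ab = gaussian_kernel(actions_a, actions_b, sigma).mean()
    return k_aa + k_bb - 2.0 * k_ab

# Illustrative use: score how different a mutated offspring actor is from a
# population member on a common batch of states (dimensions are hypothetical).
state_dim, action_dim, batch = 12, 4, 256
actor_parent = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                             nn.Linear(64, action_dim), nn.Tanh())
actor_child = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                            nn.Linear(64, action_dim), nn.Tanh())
states = torch.randn(batch, state_dim)
diversity = mmd_squared(actor_parent(states), actor_child(states))
print(f"MMD^2 diversity score: {diversity.item():.4f}")
```

In a population update, such a diversity score could, for example, be thresholded or added as a bonus so that offspring whose behaviour is too close to existing members are discarded or perturbed further, which is the role the abstract attributes to the MMD term.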