Data security and user privacy are receiving increasing attention. Federated learning with differential privacy offers a distributed machine learning framework that protects data privacy. However, the noise introduced by the differential privacy mechanism can degrade model utility, especially in the absence of reasonable gradient clipping: gradient fluctuations can cause problems such as gradient explosion, compromising training stability and potentially leaking privacy. Gradient clipping has therefore become a crucial technique for protecting both model performance and data privacy. To balance privacy protection and model performance, we propose the Adaptive Weight-Based Differential Privacy Federated Learning (AWDP-FL) framework, which processes model gradient parameters at the neural-network layer level. First, by designing and recording the change trends of two-layer historical gradient sequences, we analyze and predict the gradient variation in the current iteration and compute corresponding weight values. Then, based on these weights, we adaptively clip the gradient of each data point in each training batch, followed by gradient momentum updates based on the third moment. Before the parameters are uploaded, Gaussian noise is added to protect privacy while maintaining model accuracy. Theoretical analysis and experimental results validate the effectiveness of the framework under strong privacy constraints.
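To make the pipeline concrete, the following Python sketch illustrates the general shape of trend-based adaptive clipping followed by Gaussian noising. It is a minimal illustration under our own assumptions, not the paper's exact AWDP-FL algorithm: the function names, the sigmoid weighting of the norm trend, and all parameter choices (`base_clip`, `sigma`) are hypothetical placeholders for the weight computation and noise calibration described above.

```python
import numpy as np

def predict_clip_threshold(norm_history, base_clip=1.0):
    """Derive an adaptive clipping threshold from the trend of the last
    two recorded gradient norms (assumed sigmoid weighting, not the
    paper's exact formula)."""
    if len(norm_history) < 2:
        return base_clip
    trend = norm_history[-1] - norm_history[-2]   # rising vs. falling gradients
    weight = 1.0 / (1.0 + np.exp(-trend))         # map trend to a (0, 1) weight
    return base_clip * (0.5 + weight)             # scale threshold by the weight

def dp_clip_and_noise(per_sample_grads, norm_history, sigma=1.0):
    """Clip each sample's gradient to the adaptive threshold, average,
    then add Gaussian noise calibrated to that threshold."""
    C = predict_clip_threshold(norm_history)
    clipped = [g * min(1.0, C / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]          # standard per-sample L2 clipping
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, sigma * C / len(per_sample_grads),
                             size=mean_grad.shape)  # Gaussian mechanism
    norm_history.append(float(np.mean(
        [np.linalg.norm(g) for g in per_sample_grads])))  # record for next round
    return mean_grad + noise

# Example usage with random per-sample gradients:
history = []
batch_grads = [np.random.randn(10) for _ in range(32)]
noisy_update = dp_clip_and_noise(batch_grads, history)
```

In the actual framework, this clipped and noised update would feed into the third-moment momentum step before being uploaded to the server; that step is omitted here because the abstract does not specify its form.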