Federated learning (FL) is a promising distributed machine learning paradigm that enables model training on local clients without sharing raw data. However, clients contribute unevenly to the global model, and FL also faces challenges such as communication overhead, data heterogeneity, and privacy concerns. In this paper, we introduce a novel federated learning server aggregation algorithm: the Federated Learning Algorithm with Optimized Weight Aggregation via Particle Swarm Optimization Algorithm (AdpFedPSO). This approach dynamically adjusts each client model's contribution weight based on its performance and stability, aiming to improve global model accuracy and convergence speed while making aggregation more adaptive. Experiments on real datasets show that, compared with the standard FL aggregation baseline FedAvg, AdpFedPSO improves accuracy by about 15% on MNIST under a Dirichlet(0.6) partition, 7.3% on FashionMNIST, and 13.4% on CIFAR-10. These results indicate that AdpFedPSO not only improves global model accuracy but also accelerates convergence. It also remains robust across different numbers of clients, participation rates, and degrees of client heterogeneity, providing a useful reference for the further development of FL. Moreover, applying particle swarm optimization to FL model aggregation offers new insights and directions for future research.
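
To make the core idea concrete, the sketch below illustrates (in broad strokes, not as the paper's exact procedure) how a PSO loop could search for client aggregation weights that maximize a fitness score, e.g., the aggregated model's accuracy on a held-out validation set. The function name `pso_aggregate`, the `fitness_fn` callback, and the hyperparameter values are illustrative assumptions, not part of the original specification.

```python
import numpy as np

def pso_aggregate(client_updates, fitness_fn, n_particles=10, n_iters=20,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO search over client aggregation weights (illustrative sketch).

    client_updates: list of flattened client model parameter vectors.
    fitness_fn: maps an aggregated parameter vector to a score to maximize
                (e.g., validation accuracy of the aggregated model).
    """
    rng = np.random.default_rng(seed)
    updates = np.stack(client_updates)            # shape: (n_clients, dim)
    n_clients = updates.shape[0]

    # Each particle is a candidate weight vector over clients, kept on the simplex.
    pos = rng.random((n_particles, n_clients))
    pos /= pos.sum(axis=1, keepdims=True)
    vel = np.zeros_like(pos)

    def aggregate(weights):
        return weights @ updates                  # weighted average of client models

    pbest = pos.copy()
    pbest_score = np.array([fitness_fn(aggregate(p)) for p in pos])
    gbest = pbest[np.argmax(pbest_score)].copy()
    gbest_score = pbest_score.max()

    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1e-6, None)
        pos /= pos.sum(axis=1, keepdims=True)     # re-project onto the simplex

        scores = np.array([fitness_fn(aggregate(p)) for p in pos])
        improved = scores > pbest_score
        pbest[improved], pbest_score[improved] = pos[improved], scores[improved]
        if scores.max() > gbest_score:
            gbest, gbest_score = pos[np.argmax(scores)].copy(), scores.max()

    return gbest, aggregate(gbest)
```

In this reading, the server replaces FedAvg's fixed (data-size-proportional) weights with the PSO-found weight vector, so clients whose updates help the global objective more receive larger aggregation weights.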