Federated learning is a method for training models in a distributed environment: each client trains a model on its local dataset and shares the trained model with a server, which aggregates the client models into a global model. This approach offers several advantages, most notably that clients never transfer their raw data to the server. However, it also has limitations, including degraded global model performance when local datasets are imbalanced. In this study, we propose a new method that evaluates the quality of each client's local dataset based on its probability distribution and size, and adjusts the contribution of each locally trained model to the global model accordingly. Unlike previous studies, the proposed method directly addresses the problem of low-quality local models degrading the performance of the global model. It improves global model performance without complex additions such as specially designed loss functions for local models or data augmentation techniques to enhance dataset quality. Furthermore, the proposed way of calculating the contribution of local models quantitatively assesses the performance of clients participating in federated learning in terms of their data collection capabilities, making it an industry-friendly approach. We conducted experiments on several benchmark datasets, simulating scenarios that could occur in the real world. In most scenarios, our method outperformed the existing studies reviewed in this paper on all evaluation metrics.
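To make the aggregation idea concrete, the sketch below shows one way quality-weighted averaging could work. The abstract does not specify the actual scoring function, so the quality score here is an assumption for illustration: each client's weight is its dataset size scaled by the normalized entropy of its label distribution (a balanced dataset scores near 1, a skewed one lower), replacing the plain size-proportional weights of standard FedAvg.

```python
import numpy as np

def client_weight(labels, num_classes):
    # Hypothetical quality score (not the paper's exact formula):
    # dataset size scaled by the normalized entropy of the label
    # distribution, so balanced datasets contribute more per sample.
    counts = np.bincount(labels, minlength=num_classes)
    p = counts / counts.sum()
    nz = p[p > 0]
    entropy = -(nz * np.log(nz)).sum()
    balance = entropy / np.log(num_classes)  # in [0, 1]
    return len(labels) * balance

def aggregate(client_params, client_labels, num_classes):
    # Weighted average of client parameter vectors using the
    # quality scores above as aggregation weights.
    w = np.array([client_weight(l, num_classes)
                  for l in client_labels], dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))
```

For example, a client with a balanced label distribution receives a larger weight than an equally sized client with a 90/10 skew, so the global model is pulled more strongly toward the better-quality local model.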