Abstract

Federated learning has emerged as a promising approach for training machine learning models on decentralized data sources. It removes the need to centralize raw data, thereby preserving privacy while harnessing the strength of local data. Nevertheless, non-uniform performance and algorithmic bias across clients can adversely affect the results. To tackle these challenges, we introduce Fair Federated Learning with Opposite Generative Adversarial Networks (FFL-OppoGAN), a method that uses Opposite Generative Adversarial Networks (OppoGAN) to generate synthetic tabular data and incorporates it into federated training to improve fairness and consistency. By adding synthetic data that reduces algorithmic discrimination and adjusting the learning process to promote uniform performance among clients, our method yields a more equitable training process. We evaluated the effectiveness of FFL-OppoGAN on the Adult and Dutch datasets, chosen for their relevance to our study. The results demonstrate that our method enhances algorithmic fairness and performance consistency, outperforming baseline methods. In conclusion, FFL-OppoGAN offers a robust solution for fair and consistent federated learning, setting a promising precedent for future federated learning systems.
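
As a rough illustration of where synthetic data would enter a federated round, the sketch below augments each client's local update with samples from a stand-in generator and then averages the resulting weights FedAvg-style. It is a minimal sketch under assumptions: the OppoGAN generator and the fairness objective are not reproduced, and all names (`synth_fn`, `local_update`, `fedavg_round`) are illustrative, not the paper's implementation.

```python
# Hypothetical sketch: FedAvg-style rounds where each client's local update
# is computed on real data augmented with synthetic rows. `synth_fn` stands
# in for any generator (e.g. a GAN) that returns bias-countering samples.
import numpy as np

def local_update(w, X, y, synth_fn, lr=0.1, epochs=5, synth_frac=0.5):
    """One client's logistic-regression update on real + synthetic data."""
    X_s, y_s = synth_fn(int(len(X) * synth_frac))   # synthetic samples
    X_aug = np.vstack([X, X_s])
    y_aug = np.concatenate([y, y_s])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X_aug @ w))        # sigmoid predictions
        grad = X_aug.T @ (p - y_aug) / len(y_aug)   # logistic-loss gradient
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, synth_fn):
    """Average locally updated weights, weighted by client data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global.copy(), X, y, synth_fn))
        sizes.append(float(len(y)))
    return np.average(np.stack(updates), axis=0, weights=sizes)

# Toy usage: two clients with skewed label distributions and a dummy
# synthetic-data source.
rng = np.random.default_rng(0)
clients = [
    (rng.normal(size=(100, 4)), (rng.random(100) < 0.8).astype(float)),
    (rng.normal(size=(100, 4)), (rng.random(100) < 0.2).astype(float)),
]
synth_fn = lambda n: (rng.normal(size=(n, 4)),
                      rng.integers(0, 2, n).astype(float))

w = np.zeros(4)
for _ in range(10):
    w = fedavg_round(w, clients, synth_fn)
print("global weights:", w)
```

The design point this sketch isolates is that synthetic data enters only the clients' local updates; the aggregation step itself is unchanged from standard federated averaging.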
