Abstract

Federated learning (FL) has gained prominent attention as a collaborative machine learning method that allows multiple users to jointly train a shared model without directly exchanging raw data. This research addresses the fundamental challenge of balancing data privacy and utility in distributed learning by introducing a hybrid methodology that fuses differential privacy with federated learning (HDP‐FL). In experiments on the EMNIST and CIFAR‐10 datasets, the hybrid approach improves model accuracy by 4.22% on EMNIST and by up to 9.39% on CIFAR‐10 compared with conventional federated learning methods. Parameter studies show how the noise scale affects privacy, demonstrating the effectiveness of the hybrid DP approach in balancing privacy and accuracy. Evaluations across diverse FL techniques and client counts underscore this trade‐off, particularly in non‐IID data settings, where the hybrid method effectively counters accuracy degradation. Comparative analyses against standard machine learning and state‐of‐the‐art FL approaches consistently favor the proposed model, which achieves accuracies of 96.29% on EMNIST and 82.88% on CIFAR‐10. These results offer a strategy for IoT devices to collaborate and share knowledge securely without compromising data privacy, enabling efficient and reliable learning across decentralized networks.
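The abstract does not specify the exact HDP‐FL mechanism, but a common way to combine differential privacy with federated averaging is to clip each client's model update to a maximum L2 norm and add Gaussian noise before server-side aggregation. The sketch below illustrates that generic pattern only; the function names, `clip_norm`, and `noise_std` values are illustrative assumptions, not the paper's actual calibration.

```python
import numpy as np

def dp_clip_and_noise(update, clip_norm=1.0, noise_std=0.1, rng=None):
    # Clip the client's update to L2 norm <= clip_norm, then add
    # Gaussian noise. Illustrative DP mechanism; noise_std would
    # normally be calibrated to a target (epsilon, delta) budget.
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def federated_average(client_updates, **dp_kwargs):
    # Server-side FedAvg over privatized client updates.
    noisy = [dp_clip_and_noise(u, **dp_kwargs) for u in client_updates]
    return np.mean(noisy, axis=0)

# Usage: aggregate three simulated client updates.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]
agg = federated_average(updates, clip_norm=1.0, noise_std=0.1, rng=rng)
```

Because each client's contribution is bounded by the clip and masked by noise, the server learns an aggregate model without seeing any raw data, at the accuracy cost the abstract's noise-scale experiments quantify.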
