Abstract

Federated learning (FL) serves as a decentralized training framework for machine learning (ML) models, preserving data privacy in critical domains such as smart healthcare. However, attackers can exploit this decentralized framework to mount data and model poisoning attacks, particularly in FL-driven smart healthcare. This work examines FL-driven smart healthcare systems built on a multi-hospital architecture, focusing on heart disease detection with FL. We carry out data poisoning attacks using two methods, label flipping and input data/feature manipulation, to demonstrate that such FL-driven smart healthcare systems are vulnerable. To guard against these attacks, we propose a novel federated averaging defense mechanism that detects poisoned clients and excludes them from weight aggregation. The mechanism is based on weighted averaging, where each client's contribution is weighted according to its trustworthiness. The work addresses a critical gap in the literature by focusing on the often-overlooked issue of poisoning attacks on tabular datasets, which are central to smart healthcare systems. Testbed-based experiments demonstrate that the proposed mechanism effectively detects and mitigates data poisoning attacks in the selected FL-driven smart healthcare scenarios while maintaining high accuracy and convergence rates.
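The trust-weighted averaging idea described above can be sketched as follows. This is a minimal illustration only: the abstract does not specify how trustworthiness scores are computed, so the `trust_scores` values and the function name are assumptions, not the paper's actual method.

```python
import numpy as np

def trust_weighted_fedavg(client_weights, trust_scores):
    """Aggregate client model weights into a global model,
    scaling each client's contribution by its trust score.
    A client with trust score 0 (e.g. flagged as poisoned)
    contributes nothing to the aggregate."""
    scores = np.asarray(trust_scores, dtype=float)
    if scores.sum() == 0:
        raise ValueError("all clients are untrusted; cannot aggregate")
    # Normalize scores so the trusted contributions sum to 1.
    norm = scores / scores.sum()
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    # Weighted sum over the client axis.
    return np.tensordot(norm, stacked, axes=1)

# Hypothetical example: three hospitals; the third submits an
# obviously anomalous update and has been assigned trust 0.
updates = [[1.0, 1.0], [3.0, 3.0], [100.0, 100.0]]
global_update = trust_weighted_fedavg(updates, [0.5, 0.5, 0.0])
# global_update == [2.0, 2.0]; the poisoned client is excluded
```

With equal nonzero trust scores this reduces to standard FedAvg; driving a flagged client's score to zero is what "stopping identified poisoned clients in weight aggregation" amounts to in this sketch.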
