Abstract

Given growing concerns about data privacy and increasingly stringent data-security regulations, datasets containing private data cannot be directly mined or shared, which makes collecting and analyzing data from multiple parties difficult. Federated learning can analyze multiple datasets without transmitting the original data. However, existing federated frequent-pattern-mining frameworks rely on the Apriori property, which is inefficient and requires scanning the dataset multiple times. To improve mining efficiency, this paper proposes a federated learning framework named FedFIM. FedFIM collects noisy responses from participants, which the server uses to reconstruct a noisy dataset; a non-Apriori algorithm is then applied to this noisy dataset to mine frequent patterns. In addition, FedFIM incorporates a differential-privacy mechanism into federated learning, supporting federated modeling while protecting data privacy. Experiments show that FedFIM runs faster and is more broadly applicable than state-of-the-art baselines.
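The "noisy responses" step described above resembles the randomized-response mechanism commonly used for local differential privacy. As a rough sketch only (the abstract does not specify FedFIM's exact mechanism, so the function names, the bit-flip scheme, and the unbiasing formula below are illustrative assumptions), each participant could flip the bits of its item-presence vector with some probability, and the server could then de-bias the aggregated noisy counts:

```python
import random

def perturb(bits, p):
    """Participant side: keep each bit with probability p, flip it
    otherwise (generic randomized response; an assumed mechanism,
    not necessarily FedFIM's)."""
    return [b if random.random() < p else 1 - b for b in bits]

def estimate_counts(responses, p):
    """Server side: unbiased estimate of the true per-item counts
    from noisy responses, via c_hat = (c_noisy - n*(1-p)) / (2p - 1)."""
    n = len(responses)
    num_items = len(responses[0])
    estimates = []
    for j in range(num_items):
        noisy = sum(r[j] for r in responses)
        estimates.append((noisy - n * (1 - p)) / (2 * p - 1))
    return estimates
```

The de-biased counts give the server an approximate item-frequency table, which a non-Apriori miner (e.g. an FP-growth-style algorithm) could consume without ever seeing the raw transactions.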

