Abstract

Federated Learning (FL), a privacy-preserving distributed machine learning paradigm, has become a promising privacy computing framework for increasingly complex network systems. To incentivize more data owners to participate, service providers must fairly evaluate each data owner's contribution to the FL training process and compensate them accordingly. In this context, the Shapley value, a classical data valuation scheme, integrated with FL as the Federated Shapley Value (FedSV), becomes an effective solution. Despite its potential, this approach faces severe computational, privacy, and fairness challenges in the FL setting, where training involves thousands of devices and resources are constrained. To this end, we propose an affordable federated edge learning framework built on an efficient Shapley value estimation approach. Furthermore, we survey recent efforts on combining FL and the Shapley value, and identify future research directions for researchers and practitioners investigating this emerging topic. Finally, we conduct a thorough empirical study of FedSV on a series of tasks and integrate the existing benchmark schemes into the open-source function package FedSHAP.
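The Shapley value assigns each data owner the average marginal contribution of its data over all orderings of participants; since exact computation is exponential in the number of clients, practical estimators typically sample permutations. As a minimal illustrative sketch (not the paper's FedSV estimator), the Monte Carlo permutation approach can be written as follows, where `value_fn` is a hypothetical utility oracle, e.g. the validation accuracy of a model trained on a coalition's data:

```python
import random

def shapley_monte_carlo(players, value_fn, num_permutations=1000, seed=0):
    """Estimate Shapley values by sampling random orderings of players.

    players: list of client identifiers.
    value_fn: maps a frozenset of players to the coalition's utility
              (assumed oracle; in FL this would require model evaluation).
    """
    rng = random.Random(seed)
    shapley = {p: 0.0 for p in players}
    for _ in range(num_permutations):
        perm = list(players)
        rng.shuffle(perm)
        coalition = frozenset()
        prev_value = value_fn(coalition)
        # Accumulate each player's marginal contribution in this ordering.
        for p in perm:
            coalition = coalition | {p}
            cur_value = value_fn(coalition)
            shapley[p] += cur_value - prev_value
            prev_value = cur_value
    # Average over sampled permutations.
    return {p: v / num_permutations for p, v in shapley.items()}
```

For an additive utility (each client contributes a fixed amount independent of the coalition), the estimate recovers each client's contribution exactly; real FL utilities are non-additive, which is what makes fair attribution, and its efficient approximation, nontrivial.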
