Abstract

The ever-increasing use of Artificial Intelligence applications has made it apparent that the quality of the training datasets affects the performance of the resulting models. To this end, Federated Learning aims to engage multiple entities to contribute to the learning process with locally maintained data, without requiring them to share the actual datasets. Since the parameter server does not have access to the actual training datasets, it becomes challenging to reward users by directly inspecting dataset quality. Instead, this paper focuses on ways to strengthen user engagement by offering “fair” rewards, proportional to the model improvement (in terms of accuracy) each user offers. Furthermore, to enable objective judgment of the quality of contributions, we devise a point system that records user performance, assisted by blockchain technologies. More precisely, we have developed a verification algorithm that evaluates the performance of users’ contributions by comparing the resulting accuracy of the global model against a verification dataset, and we demonstrate how this metric can be used to offer security improvements in a Federated Learning process. Finally, we implement the solution in a simulation environment in order to assess its feasibility and collect baseline results using datasets of varying quality.

Highlights

  • Federated Learning (FL) is a subfield of Machine Learning, first introduced by Google in 2016 [1]

  • The main contributions of this paper can be summarized as follows: a) It extends the capabilities of our model update verification algorithm to provide a metric proportional to the model update contributions and increase fairness in reward allocation. b) It describes the implementation of the aforementioned verification algorithm in a simulation environment to verify feasibility and provide baseline results, building on previous work regarding the specifications of executing a Federated Learning process within a smart contract [26]

  • The main steps involved in the process are: a) the model weights are fused with the global model weights, b) the derived model weights are evaluated against a verification dataset and the difference in accuracy is recorded for reward calculation, c) if the accuracy increases, the specific model update is saved, otherwise it is discarded (see the sketch below)
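
The following minimal Python sketch illustrates these three steps. It is an assumption-laden illustration, not the paper's implementation: the toy linear model, the evaluate_accuracy helper, and the plain averaging used for weight fusion are placeholders chosen only for demonstration.

import numpy as np

def evaluate_accuracy(weights, verification_set):
    # Hypothetical stand-in for model evaluation: scores a toy linear
    # classifier (defined by `weights`) on the held-out verification data.
    X, y = verification_set
    predictions = (X @ weights > 0).astype(int)
    return float(np.mean(predictions == y))

def process_update(global_weights, local_weights, verification_set):
    # a) Fuse the submitted model weights with the current global weights
    #    (plain averaging here; the paper's fusion rule may differ).
    fused = (global_weights + local_weights) / 2.0

    # b) Evaluate the fused model against the verification dataset and
    #    record the accuracy difference for reward calculation.
    acc_before = evaluate_accuracy(global_weights, verification_set)
    acc_after = evaluate_accuracy(fused, verification_set)
    reward = max(acc_after - acc_before, 0.0)  # reward proportional to improvement

    # c) Keep the update only if accuracy increases; otherwise discard it.
    if acc_after > acc_before:
        return fused, reward
    return global_weights, 0.0

# Usage example with synthetic data (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(int)
verification_set = (X, y)

global_w = np.zeros(5)                            # untrained global model
local_w = true_w + rng.normal(scale=0.1, size=5)  # a useful client update
global_w, reward = process_update(global_w, local_w, verification_set)
print(f"reward earned: {reward:.3f}")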


Summary

INTRODUCTION

Federated Learning (FL) is a subfield of Machine Learning, first introduced by Google in 2016 [1]. Potential for incentives or rewards: in a Federated Learning setup, it is important to incentivize users that contribute quality data, as the accuracy of the global model is directly proportional to the quality of the training data. The advantages include incentives for devices that contribute larger amounts of data to the training process, as well as removing the single point of failure posed by a central server outage. They study the effects of a miner’s malfunction, imposed energy constraints, and the number of participating devices with respect to end-to-end latency and robustness. The main contributions of this paper can be summarized as follows: a) It extends the capabilities of our previous model update verification algorithm (presented in [25]) to provide a metric proportional to the model update contributions and increase fairness in reward allocation. b) It describes the implementation of the aforementioned verification algorithm in a simulation environment to verify feasibility and provide baseline results, building on previous work regarding the specifications of executing a Federated Learning process within a smart contract [26].

THE REWARDING ALGORITHM
Principle of Operation
Algorithm 1 – Reward Calculation inside the FL process
SMART CONTRACT SPECIFICATIONS
TESTBED
Findings
CONCLUSIONS