Federated learning (FL) can break down data silos in mobile edge computing, allowing multiple data owners to collaboratively train shared machine learning models without disclosing their local data. However, incentivizing clients to participate actively in training while ensuring efficient convergence and high test accuracy of the model remains an important open issue. Traditional methods often adopt a reverse auction framework but overlook client volatility. This paper proposes a multi-dimensional reverse auction mechanism (MRATR) that accounts for the uncertainty of client training time and for client reputation. First, we introduce reputation to objectively reflect each client's data quality and training stability. We then formulate social welfare maximization as an optimization problem and prove it to be NP-hard. Next, we propose the multi-dimensional auction mechanism MRATR, which finds an optimal client selection and task allocation strategy while accounting for clients' volatility and differences in data quality. The mechanism has polynomial computational complexity, promotes rapid convergence of FL task models, achieves near-optimal social welfare, and attains high test accuracy. Finally, the effectiveness of the mechanism is verified through simulation experiments: compared with a series of baseline mechanisms, MRATR converges faster and achieves higher test accuracy on both the CIFAR-10 and IMAGE-100 datasets.
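To make the client-selection step concrete, the sketch below shows a generic reputation-weighted, budget-constrained greedy selection over reverse-auction bids. The bid fields (cost, reputation, data size), the value-per-cost scoring rule, and the budget constraint are illustrative assumptions only; they do not reproduce the paper's MRATR mechanism, which additionally models training-time uncertainty and task allocation.

```python
# Illustrative sketch only: a greedy, reputation-weighted reverse-auction client
# selection under a server budget. The scoring rule, bid format, and budget
# constraint are assumptions for illustration, not the paper's MRATR design.
from dataclasses import dataclass
from typing import List

@dataclass
class Bid:
    client_id: int
    cost: float        # claimed monetary cost (reverse-auction bid)
    reputation: float  # in [0, 1], reflecting data quality / training stability
    data_size: int     # number of local samples offered for training

def select_clients(bids: List[Bid], budget: float) -> List[Bid]:
    """Greedy polynomial-time selection: rank bids by a value-per-cost score
    that rewards reputation and data size, then admit clients until the
    budget is exhausted. A generic surrogate for social-welfare maximization."""
    ranked = sorted(
        bids,
        key=lambda b: (b.reputation * b.data_size) / max(b.cost, 1e-9),
        reverse=True,
    )
    selected, spent = [], 0.0
    for bid in ranked:
        if spent + bid.cost <= budget:
            selected.append(bid)
            spent += bid.cost
    return selected

if __name__ == "__main__":
    demo_bids = [Bid(0, 3.0, 0.9, 500), Bid(1, 1.5, 0.4, 300), Bid(2, 2.0, 0.8, 400)]
    winners = select_clients(demo_bids, budget=4.0)
    print([w.client_id for w in winners])  # best-scoring clients that fit the budget
```

In this kind of greedy scheme the sort dominates the running time, giving O(n log n) complexity in the number of bids, which is consistent with the polynomial-time property claimed for MRATR, although the actual selection and payment rules in the paper may differ.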