Abstract

Federated learning (FL) is a distributed machine learning paradigm that avoids transmitting privacy-sensitive local training samples. In wireless FL networks, however, resource heterogeneity produces straggler clients that slow the learning process, and the non-independent and identically distributed (non-IID) nature of local training samples slows it further. Together with resource constraints during learning, these challenges make it essential to optimize client selection and resource allocation. While numerous studies have made progress in this regard, few have jointly optimized client selection and computational power (i.e., CPU frequency) for both the clients and the edge server in each global iteration. In this paper, we first define a cost function that captures learning latency and non-IID characteristics. We then formulate a joint client selection and CPU frequency control problem that minimizes the time-averaged cost subject to long-term power constraints. Using Lyapunov optimization theory, the long-term problem is transformed into a sequence of short-term problems. Finally, we propose an algorithm that determines the optimal client selection decision and the corresponding optimal CPU frequencies for the selected clients and the server. Theoretical analysis provides performance guarantees, and simulation results show that the proposed algorithm outperforms comparative algorithms in test accuracy while maintaining low power consumption.
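To make the Lyapunov step concrete, the following is a standard drift-plus-penalty sketch under assumed notation (the abstract fixes no symbols): let $C(t)$ denote the per-round cost, $p_i(t)$ the power consumed by node $i$ (a client or the server) in round $t$, $\bar{p}_i$ its long-term power budget, and $V>0$ a tunable cost-versus-constraint weight. Each long-term power constraint is replaced by a virtual deficit queue

$$Q_i(t+1) = \max\{\, Q_i(t) + p_i(t) - \bar{p}_i,\ 0 \,\},$$

whose stability implies $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}[p_i(t)] \le \bar{p}_i$. Bounding the one-slot drift of the Lyapunov function $L(t) = \frac{1}{2}\sum_i Q_i(t)^2$ plus the penalty $V\,C(t)$ then reduces the time-averaged problem to a per-round problem,

$$\min_{\text{client selection},\ \text{CPU frequencies}}\ V\,C(t) + \sum_i Q_i(t)\,p_i(t),$$

solved at each global iteration; this construction yields the usual $O(1/V)$ optimality gap with $O(V)$ queue backlog.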
