Abstract

Federated learning (FL) has emerged as a promising framework for exploiting the massive data generated by edge devices to train a common model while preserving the privacy of local data. When FL is implemented over wireless networks, the participation of more devices is encouraged to mitigate the training inefficiency caused by irregular local data, but it tends to increase communication latency. To resolve this tension, we employ non-orthogonal multiple access (NOMA) assisted by intelligent reflecting surfaces (IRSs), which accommodates more devices and shapes their channels favorably for FL performance. For FL with IRS-NOMA, we minimize the total latency in two ways: reducing the per-round latency, dominated by local computation and uplink communication, by optimizing the IRS-NOMA strategies, and improving the training efficiency under irregular local data through active device selection. We then propose an auction-based IRS allocation that uses the optimized total latency to value the IRSs when multiple base stations of different operators share their neighboring IRSs. Winner determination (WD) and payment methods are devised for multiple bids on IRS subsets so as to maximize social welfare. The results show that the proposed latency-minimizing algorithm outperforms the benchmarks by improving both communication and training efficiency through device selection combined with IRS-NOMA optimization. In addition, the auction mechanism with the proposed WD outperforms the benchmarks: social welfare is improved by constructing each bid with a valuation over multiple IRSs and by increasing the number of bids submitted.
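To make the auction concrete, the sketch below illustrates one plausible form of the winner-determination step described above: base stations submit bids, each a valuation on a subset of IRSs, and the allocation that maximizes total declared valuation (social welfare) is selected subject to at most one winning bid per bidder and disjoint IRS subsets. The brute-force search and the bid format are illustrative assumptions, not the paper's actual algorithm.

```python
from itertools import combinations

def winner_determination(bids):
    """Brute-force winner determination for a combinatorial auction
    over IRS subsets (illustrative sketch, not the paper's method).

    bids: list of (bidder, irs_subset, valuation) tuples, where
          irs_subset is a frozenset of IRS indices.
    Returns (best_welfare, winning_bids) such that each bidder wins
    at most one bid and winning subsets are pairwise disjoint.
    """
    best_welfare, best_alloc = 0.0, []
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            bidders = [b for b, _, _ in combo]
            if len(set(bidders)) < len(bidders):
                continue  # at most one winning bid per base station
            union, disjoint = set(), True
            for _, subset, _ in combo:
                if union & subset:
                    disjoint = False  # the same IRS cannot serve two winners
                    break
                union |= subset
            if not disjoint:
                continue
            welfare = sum(v for _, _, v in combo)
            if welfare > best_welfare:
                best_welfare, best_alloc = welfare, list(combo)
    return best_welfare, best_alloc

# Hypothetical example: two base stations bidding on three IRSs.
bids = [
    ("BS1", frozenset({1, 2}), 5.0),
    ("BS2", frozenset({2, 3}), 4.0),
    ("BS2", frozenset({3}), 3.0),
]
welfare, winners = winner_determination(bids)
# BS1's bid on {1, 2} and BS2's bid on {3} are compatible: welfare 8.0
```

The exponential enumeration is only viable for small instances; WD for combinatorial auctions is NP-hard in general, which is presumably why the paper devises a dedicated WD method.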
