Generative adversarial networks (GANs) have been advancing rapidly and attracting tremendous interest from both academia and industry. With the development of wireless technologies, the huge amount of data generated at the network edge provides an unprecedented opportunity to develop GAN applications. However, due to constraints such as bandwidth, privacy, and legal issues, it is inappropriate to collect and send all data to the cloud or servers for analysis, training, and mining. Thus, deploying and training GANs at the edge becomes a promising alternative. The instability of GANs introduced by non-independent and identically distributed (Non-IID) data poses significant challenges to training. To address these challenges, this paper presents a novel federated learning framework for GANs, namely, Collaborated gAme Parallel Learning (CAP). CAP supports parallel training of data and models for GANs, breaking the isolated training among generators that exists in previous distributed algorithms and achieving collaborative learning among the cloud, edge servers, and devices. Then, to further enhance the ability of CAP-GAN to address Non-IID issues, we propose a Mix-Generator (Mix-G) module that divides a generator into a sharing layer and a personalizing layer. The Mix-G module extracts generic and personalized features and improves the performance of CAP-GAN on highly personalized datasets. Experimental results and analysis substantiate the usefulness and superiority of the proposed CAP-GAN scheme, which achieves better results in Non-IID scenarios than state-of-the-art algorithms.
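To make the sharing/personalizing split of the Mix-G module concrete, the following is a minimal sketch under assumed details (a PyTorch-style model; the layer sizes, class name `MixGenerator`, and helper `shared_state_dict` are illustrative, not the authors' implementation): the early layers would be aggregated across clients while the final layers remain local to capture client-specific features.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's code): a generator split into a
# "sharing" sub-network, whose parameters could be aggregated across
# clients, and a "personalizing" sub-network kept on-device.
class MixGenerator(nn.Module):
    def __init__(self, latent_dim=100, img_dim=784):
        super().__init__()
        # Sharing layers: intended to capture generic features;
        # in a CAP-style setup these weights would be averaged at the server/edge.
        self.sharing = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
        )
        # Personalizing layers: kept local to model client-specific (Non-IID) features.
        self.personalizing = nn.Sequential(
            nn.Linear(512, img_dim),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.personalizing(self.sharing(z))

    def shared_state_dict(self):
        # Only these parameters would be uploaded for aggregation.
        return self.sharing.state_dict()

# Usage: sample a batch of fake samples from random noise.
gen = MixGenerator()
fake = gen(torch.randn(16, 100))  # -> shape (16, 784)
```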