Abstract

Graph Neural Networks (GNNs) have achieved great success in graph data processing and analysis, but designing a GNN architecture is difficult and time-consuming. To reduce the development cost of GNNs, several GNN Neural Architecture Search (GNN NAS) techniques have recently been proposed for the automatic design of GNN architectures. These techniques bring great convenience to the use of GNNs, but they cannot be applied to federated learning scenarios: they consider only a single-source graph dataset and fail to handle distributed, private graph datasets, which limits their applications. To address this shortcoming, in this paper we propose FL-AGNNS, an efficient GNN NAS algorithm that enables distributed agents to cooperatively design powerful GNN models while keeping personal information on local devices. FL-AGNNS designs a novel federated evolutionary optimization strategy that fully considers the GNN architectures favored by each client and thus recommends architectures that perform well across multiple datasets. In addition, FL-AGNNS applies a GNN super-network, a weight-sharing strategy, to speed up the evaluation of GNN models during the search phase.
Extensive experimental results show that FL-AGNNS can recommend better GNN models in a short time under the federated learning framework, surpassing state-of-the-art GNN models.
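To make the idea of federated evolutionary architecture search concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: each client scores candidate GNN architectures on its private data (here replaced by a stand-in scoring function), the server aggregates the scores so that surviving architectures must do well across all clients, and the population evolves by selection and mutation. The search space fields (aggregator, hidden dimension, layer count) and all function names are assumptions for illustration only.

```python
import random

random.seed(0)

# Toy search space: an architecture is (aggregator, hidden_dim, num_layers).
AGGREGATORS = ["mean", "max", "sum"]
HIDDEN_DIMS = [16, 32, 64]
NUM_LAYERS = [1, 2, 3]

def random_arch():
    return (random.choice(AGGREGATORS),
            random.choice(HIDDEN_DIMS),
            random.choice(NUM_LAYERS))

def mutate(arch):
    # Re-sample one randomly chosen field of the architecture.
    agg, dim, layers = arch
    field = random.randrange(3)
    if field == 0:
        agg = random.choice(AGGREGATORS)
    elif field == 1:
        dim = random.choice(HIDDEN_DIMS)
    else:
        layers = random.choice(NUM_LAYERS)
    return (agg, dim, layers)

def local_fitness(arch, client_seed):
    # Stand-in for per-client validation accuracy. In a real federated
    # system each client would evaluate the architecture on its private
    # graph data (e.g. via shared super-network weights) and report only
    # the scalar score, never the raw data.
    rng = random.Random((hash(arch) ^ client_seed) & 0xFFFFFFFF)
    return rng.random()

def federated_evolve(num_clients=3, pop_size=8, generations=5, top_k=4):
    population = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        # Server-side aggregation: rank architectures by their summed
        # fitness over all clients, so survivors must perform well on
        # multiple private datasets, not just one.
        scored = sorted(
            population,
            key=lambda a: sum(local_fitness(a, c) for c in range(num_clients)),
            reverse=True,
        )
        survivors = scored[:top_k]
        # Refill the population by mutating randomly chosen survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - top_k)]
    return population[0]

best = federated_evolve()
```

The key design point this sketch highlights is that only fitness scores cross the network boundary: the server never sees client data, only per-architecture evaluations, which is what allows cooperative search over distributed private datasets.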
