Abstract

Neural architecture search (NAS) has received significant attention across the computational intelligence research community and has advanced the state of the art of neural models for grid-like and sequential data such as images and text. However, little work has been done on Graph Neural Network (GNN) models dedicated to unstructured network data. Given the huge number of possible choices and combinations of components such as aggregators and activation functions, identifying a suitable GNN model for a specific problem typically requires substantial expert knowledge and laborious trial and error. Moreover, even a moderate change in hyperparameters such as the learning rate or dropout rate can dramatically affect a GNN model's learning capacity. In this paper, we propose a novel framework that evolves individual models within a large GNN architecture search space. Rather than optimizing the model structure alone, an alternating evolution process is performed between GNN model structures and hyperparameters, so that each dynamically approaches an optimal fit to the other. Experiments and validations demonstrate that evolutionary NAS matches existing state-of-the-art reinforcement learning methods for both transductive and inductive graph representation learning and node classification.
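The alternating evolution described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the paper's implementation: the search-space values, population sizes, mutation operator, and the placeholder `fitness` stub below are all assumptions for exposition; in practice, `fitness` would train the candidate GNN and return its validation accuracy.

```python
import random

# Toy search spaces (assumed for illustration; the paper's actual
# component and hyperparameter choices may differ).
STRUCT_SPACE = {
    "aggregator": ["mean", "sum", "max", "lstm"],
    "activation": ["relu", "tanh", "elu"],
}
HYPER_SPACE = {
    "lr": [1e-2, 5e-3, 1e-3],
    "dropout": [0.0, 0.2, 0.5],
}

def sample(space):
    """Draw a random gene (one value per field) from a search space."""
    return {k: random.choice(v) for k, v in space.items()}

def mutate(gene, space):
    """Resample one randomly chosen field of a gene within its space."""
    child = dict(gene)
    key = random.choice(list(space))
    child[key] = random.choice(space[key])
    return child

def fitness(structure, hyperparams):
    """Placeholder: train the GNN defined by `structure` with
    `hyperparams` and return validation accuracy; random here."""
    return random.random()

def evolve(pop, space, score, generations=5):
    """Simple (mu + lambda) evolution with elitist survival."""
    for _ in range(generations):
        children = [mutate(random.choice(pop), space) for _ in pop]
        pop = sorted(pop + children, key=score, reverse=True)[: len(pop)]
    return pop

# Alternate: evolve structures against the current best hyperparameters,
# then evolve hyperparameters against the current best structure.
structs = [sample(STRUCT_SPACE) for _ in range(8)]
hypers = [sample(HYPER_SPACE) for _ in range(8)]
for _ in range(3):  # outer alternating rounds
    best_h = hypers[0]
    structs = evolve(structs, STRUCT_SPACE, lambda s: fitness(s, best_h))
    best_s = structs[0]
    hypers = evolve(hypers, HYPER_SPACE, lambda h: fitness(best_s, h))

print("best structure:", structs[0], "best hyperparams:", hypers[0])
```

Freezing one gene population while the other evolves keeps each fitness evaluation well-defined; the outer loop lets the two populations co-adapt, which is the intent of the alternating scheme described in the abstract.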
