Abstract

Graph neural architecture search (GNAS) has shown great success in designing prominent models on non-Euclidean data. However, existing GNAS methods must search from scratch on each new task, which is time-consuming and inefficient in real application scenarios. In this paper, we propose a meta-reinforcement learning method for Graph Neural Architecture Search (Meta-GNAS) that improves learning efficiency on new tasks by leveraging knowledge learned from previous tasks. To the best of our knowledge, this is the first work to apply meta-learning to GNAS tasks. Moreover, to further improve efficiency on a new task, we use a predictive model to estimate the accuracy of each sampled graph neural architecture instead of training it from scratch. The experimental results demonstrate that the architectures designed by Meta-GNAS outperform state-of-the-art manually designed architectures, and that the search is faster than other search methods, with an average search time of less than 210 GPU seconds across 6 datasets.

