With the ever-increasing scale of graph-structured data, the computational demands of large-scale graph neural networks (GNNs) can impede their deployment on resource-constrained devices. Knowledge distillation (KD) transfers the expertise of a large, pre-trained model (the teacher) to a smaller architecture (the student) while maintaining comparable performance. However, existing KD approaches for GNNs typically keep the teacher fixed while the student learns; because the teacher ignores the student's learning feedback, the distilled student suffers a performance drop. To address this issue, we propose a knowledge transfer method for GNNs based on adaptive meta-learning. At each distillation step, the teacher continuously updates its parameters along the gradient direction that is optimal for the student, so that it learns to teach knowledge suited to the student. To preserve the structural features of each node and further mitigate over-smoothing, we also introduce a local structure preservation loss. Comprehensive experiments on four benchmarks demonstrate the effectiveness of our method.
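To make the bi-level "learning to teach" idea concrete, the sketch below shows one possible distillation step in PyTorch 2.x. It is an illustrative approximation, not the authors' implementation: the teacher and student are assumed to be GNN modules that take node features and a dense adjacency matrix (with self-loops) and return (logits, node embeddings), and the helpers kd_loss and local_structure_loss are hypothetical stand-ins for the distillation and structure-preservation terms described above.

```python
# Illustrative sketch only: a bi-level "learning to teach" KD loop for GNNs.
# Assumptions (not from the paper): PyTorch 2.x, teacher/student GNNs returning
# (logits, embeddings), hypothetical kd_loss()/local_structure_loss() helpers,
# and a dense adjacency matrix that already includes self-loops.

import torch
import torch.nn.functional as F
from torch.func import functional_call


def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label distillation loss (temperature-scaled KL divergence)."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T


def local_structure_loss(student_emb, teacher_emb, adj):
    """Hypothetical structure-preservation term: align each node's similarity
    distribution over its neighbors between teacher and student embeddings."""
    def neighbor_dist(emb):
        sim = emb @ emb.t()                          # pairwise similarities
        sim = sim.masked_fill(adj == 0, float("-inf"))
        return F.softmax(sim, dim=-1)
    return F.kl_div(neighbor_dist(student_emb).clamp_min(1e-9).log(),
                    neighbor_dist(teacher_emb), reduction="batchmean")


def distill_step(teacher, student, x, adj, y, train_mask, quiz_mask,
                 student_opt, teacher_opt, inner_lr=0.01, alpha=0.5, beta=0.1):
    """One KD step: (1) virtual student update under the current teacher,
    (2) teacher meta-update from the virtual student's quiz loss,
    (3) real student update under the adapted teacher."""
    t_logits, t_emb = teacher(x, adj)

    # (1) Virtual student update; keep the graph so gradients reach the teacher.
    s_logits, s_emb = student(x, adj)
    inner = (F.cross_entropy(s_logits[train_mask], y[train_mask])
             + alpha * kd_loss(s_logits, t_logits)
             + beta * local_structure_loss(s_emb, t_emb, adj))
    s_params = dict(student.named_parameters())
    grads = torch.autograd.grad(inner, list(s_params.values()), create_graph=True)
    virtual = {n: p - inner_lr * g
               for (n, p), g in zip(s_params.items(), grads)}

    # (2) Teacher meta-update: how well does the virtually updated student do
    # on held-out "quiz" nodes? Gradients flow back through the virtual update.
    v_logits, _ = functional_call(student, virtual, (x, adj))
    meta = F.cross_entropy(v_logits[quiz_mask], y[quiz_mask])
    teacher_opt.zero_grad()
    meta.backward()
    teacher_opt.step()

    # (3) Real student update with knowledge from the freshly adapted teacher.
    with torch.no_grad():
        t_logits, t_emb = teacher(x, adj)
    s_logits, s_emb = student(x, adj)
    loss = (F.cross_entropy(s_logits[train_mask], y[train_mask])
            + alpha * kd_loss(s_logits, t_logits)
            + beta * local_structure_loss(s_emb, t_emb, adj))
    student_opt.zero_grad()
    loss.backward()
    student_opt.step()
    return loss.item()
```

The key design choice this sketch illustrates is that the teacher's update is driven by the student's post-update performance rather than by the teacher's own task loss, which is one common way to realize "learning to teach" with meta-learning; the loss weights alpha and beta, the inner learning rate, and the quiz split are placeholders rather than values from the paper.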