Although deep CNN-based super-resolution methods have achieved outstanding performance, their memory cost and computational complexity severely limit their practical deployment. Knowledge distillation (KD), which efficiently transfers knowledge from a cumbersome network (the teacher) to a compact network (the student), has demonstrated its advantages in several computer vision applications. The representation of knowledge is vital to both knowledge transfer and student learning, yet it is generally defined in a hand-crafted manner or taken directly from intermediate features. In this paper, we propose a model-agnostic meta knowledge distillation method under the teacher–student architecture for the single image super-resolution task. It provides a more flexible and accurate way for the teacher to transmit knowledge matched to the student's capacity, via knowledge representation networks (KRNets) with learnable parameters. Specifically, texture-aware dynamic kernels are generated from local information to decompose the distillation problem into texture-wise supervision, further promoting the recovery of high-frequency details. In addition, the KRNets are optimized in a meta-learning manner to ensure that knowledge transfer and student learning improve the student's reconstruction quality. Experiments on various single image super-resolution datasets demonstrate that our method outperforms existing distillation methods with manually defined knowledge representations and helps super-resolution algorithms achieve better reconstruction quality without introducing any extra inference complexity.
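
To make the idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a learnable knowledge representation network generates a per-pixel dynamic kernel from the teacher's local features and applies it to transform those features before supervising the student. The class and function names (`KRNet`, `distill_loss`), the architecture details, and the use of a softmax-normalized 3x3 kernel are all illustrative assumptions; the paper's meta-learning optimization of the KRNet is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KRNet(nn.Module):
    """Hypothetical sketch of a knowledge representation network.

    Generates a k*k dynamic kernel at every spatial location from the
    teacher's local features ("texture-aware"), then applies those kernels
    to the teacher features to form the knowledge passed to the student.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        # Small conv head that predicts one k*k kernel per pixel.
        self.kernel_gen = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, kernel_size * kernel_size, 3, padding=1),
        )

    def forward(self, teacher_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = teacher_feat.shape
        # Per-pixel kernels, softmax-normalized over the k*k window.
        kernels = self.kernel_gen(teacher_feat)            # (b, k*k, h, w)
        kernels = F.softmax(kernels, dim=1)
        # Unfold teacher features into k*k local patches per position.
        patches = F.unfold(teacher_feat, self.k, padding=self.k // 2)
        patches = patches.view(b, c, self.k * self.k, h, w)
        # Weighted sum of each local patch with its dynamic kernel.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)  # (b, c, h, w)


def distill_loss(student_feat: torch.Tensor,
                 teacher_feat: torch.Tensor,
                 krnet: KRNet) -> torch.Tensor:
    # Texture-wise supervision, simplified: L1 between the student's
    # features and the KRNet-transformed teacher features.
    return F.l1_loss(student_feat, krnet(teacher_feat))


# Toy usage with 64-channel feature maps from teacher and student.
krnet = KRNet(channels=64)
t = torch.randn(2, 64, 48, 48)
s = torch.randn(2, 64, 48, 48, requires_grad=True)
loss = distill_loss(s, t, krnet)
loss.backward()
```

Because the KRNet exists only during training, as the abstract notes, the student network's inference cost is unchanged; the sketch above would be an auxiliary loss term alongside the usual reconstruction loss.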