Complementary label learning (CLL) is a form of weakly supervised learning in which each sample is annotated with a class it does not belong to, and the model must infer its true class from this indirect supervision. However, current CLL methods mainly rely on rewriting classification losses and do not fully exploit the supervisory information carried by complementary labels. Enhancing this supervisory information is therefore a promising way to improve CLL performance. In this paper, we propose a novel framework, Complementary Label Enhancement based on Knowledge Distillation (KDCL), to address the insufficient attention given to complementary labels. KDCL consists of two deep neural networks: a teacher model and a student model. The teacher model softens the complementary labels to enrich the supervisory information they carry, while the student model learns from the softened complementary labels produced by the teacher. Both the teacher and student models are trained on a dataset containing only complementary labels. To evaluate the effectiveness of KDCL, we conduct experiments on four datasets (MNIST, F-MNIST, K-MNIST and CIFAR-10) with two teacher-student pairs (LeNet-5+MLP and DenseNet-121+ResNet-18) and three CLL algorithms (PC, FWD and SCL-NL). The experimental results demonstrate that models optimized with KDCL achieve higher accuracy than those trained with complementary labels alone.
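To make the teacher-student setup concrete, the sketch below illustrates one plausible instantiation of the idea in PyTorch: the student is trained with a complementary-label loss (SCL-NL is used here as an example) combined with a distillation term against the frozen teacher's temperature-softened predictions. The specific loss mixing weight, temperature, and helper names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal PyTorch sketch of the KDCL idea described above. The loss form
# (SCL-NL), the temperature, and the mixing weight `alpha` are illustrative
# assumptions, not the authors' exact settings.
import torch
import torch.nn.functional as F

def scl_nl_loss(logits, comp_labels):
    """SCL-NL complementary-label loss: -log(1 - p_{comp_label})."""
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-12).mean()

def kdcl_student_loss(student_logits, teacher_logits, comp_labels,
                      temperature=4.0, alpha=0.5):
    """Combine the CLL loss with distillation from the teacher's
    temperature-softened predictions (soft complementary supervision)."""
    cll = scl_nl_loss(student_logits, comp_labels)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kd = kd * temperature ** 2  # standard scaling for temperature-based distillation
    return alpha * cll + (1.0 - alpha) * kd

# Toy usage (10 classes, batch of 4):
if __name__ == "__main__":
    student_logits = torch.randn(4, 10, requires_grad=True)
    teacher_logits = torch.randn(4, 10)          # from a frozen, pre-trained teacher
    comp_labels = torch.randint(0, 10, (4,))     # class each sample does NOT belong to
    loss = kdcl_student_loss(student_logits, teacher_logits, comp_labels)
    loss.backward()
    print(loss.item())
```

In this sketch the teacher would first be trained on the complementary-label dataset with a CLL loss of its own; its softened outputs then serve as the enriched supervision that the student distills from, alongside the original complementary labels.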