Knowledge distillation (KD) techniques aim to transfer knowledge from complex teacher neural networks to simpler student networks. In this study, we propose a novel knowledge distillation method, Multiloss Joint Gradient Control Knowledge Distillation (MJKD), which combines feature- and logit-based knowledge distillation with gradient control. MJKD treats the gradients of the task loss (cross-entropy loss), the feature distillation loss, and the logit distillation loss separately. Our experimental results suggest that the logits carry more information and should therefore be assigned greater weight during gradient updates. Empirical findings on the CIFAR-100 and Tiny-ImageNet datasets indicate that MJKD generally outperforms traditional knowledge distillation methods, significantly improving the generalization ability and classification accuracy of student networks; for instance, MJKD achieves 63.53% accuracy on Tiny-ImageNet with the ResNet18-MobileNetV2 teacher-student pair. Furthermore, we present visualizations and analyses to explore its potential working mechanisms.
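To make the idea of per-loss gradient control concrete, the following is a minimal PyTorch-style sketch, not the authors' released implementation: it computes a cross-entropy task loss, a feature distillation loss, and a temperature-scaled logit distillation loss, obtains each loss's gradient separately, and sums them with per-loss weights. The toy networks, the projection layer, and the values of `w_task`, `w_feat`, `w_logit`, and `T` are hypothetical placeholders; the logit weight is set highest only to mirror the observation in the abstract.

```python
# Illustrative sketch of per-loss gradient weighting for knowledge distillation
# (hypothetical setup, not the paper's exact method or hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy teacher (wider) and student (narrower) networks.
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
proj = nn.Linear(16, 64)  # aligns student features with teacher feature size

params = list(student.parameters()) + list(proj.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)

# Hypothetical per-loss weights and distillation temperature;
# the logit-distillation gradient receives the largest weight.
w_task, w_feat, w_logit, T = 1.0, 1.0, 2.0, 4.0

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))

with torch.no_grad():
    t_feat = teacher[1](teacher[0](x))   # teacher hidden features
    t_logit = teacher[2](t_feat)         # teacher logits

s_feat = student[1](student[0](x))       # student hidden features
s_logit = student[2](s_feat)             # student logits

loss_task = F.cross_entropy(s_logit, y)
loss_feat = F.mse_loss(proj(s_feat), t_feat)
loss_logit = F.kl_div(F.log_softmax(s_logit / T, dim=1),
                      F.softmax(t_logit / T, dim=1),
                      reduction="batchmean") * T * T

# Compute each loss's gradient separately so they can be weighted individually.
grads = [torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
         for loss in (loss_task, loss_feat, loss_logit)]

optimizer.zero_grad()
for p, g_task, g_feat, g_logit in zip(params, *grads):
    # Combine the per-loss gradients with their weights; skip losses that do
    # not touch this parameter (their gradient is None).
    contribs = [w * g for w, g in ((w_task, g_task), (w_feat, g_feat), (w_logit, g_logit))
                if g is not None]
    if contribs:
        p.grad = sum(contribs)
optimizer.step()
```

In this sketch the weighting is applied to the gradients rather than to a single summed loss, which is one straightforward way to realize the "joint gradient control" idea; the paper's actual control strategy may differ.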