Real-world datasets often exhibit imbalanced class distributions, a common challenge for multi-class classification algorithms. To address this multi-class imbalanced classification problem, this paper proposes a novel reinforced knowledge distillation method: an improved fine-grained classification architecture that combines a knowledge distillation strategy with policy gradient reinforcement learning. In addition, reinforced knowledge distillation employs a newly designed reward signal and a novel sample-weight update strategy to train the policies that search for the optimal student network, making it more effective at handling multi-class imbalanced classification. The effectiveness and practicability of the proposed method are verified through its application to a simulated industrial process benchmark and extensive real-world datasets.
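To make the core idea concrete, the following is a minimal numpy sketch of knowledge distillation with reward-driven sample reweighting. It is an illustrative assumption, not the paper's exact formulation: `kd_loss` computes the standard weighted KL divergence between temperature-softened teacher and student outputs, and `update_weights` is a hypothetical exponential update that up-weights samples receiving negative reward (e.g., misclassified minority-class samples), loosely in the spirit of the reward signal described above.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the class axis."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, weights, T=2.0):
    """Weighted distillation loss: per-sample KL(teacher || student)
    on temperature-softened distributions, averaged by sample weight."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    return np.sum(weights * kl) / np.sum(weights)

def update_weights(weights, rewards, lr=0.5):
    """Hypothetical reward-driven update (an assumption, not the
    authors' rule): samples with negative reward are up-weighted,
    then weights are renormalized to keep their sum constant."""
    w = weights * np.exp(-lr * rewards)
    return w / w.sum() * len(w)
```

In a full method the rewards would come from a policy-gradient training loop evaluating student predictions; here they are simply given per-sample scalars, which is enough to show how minority-class samples can gain influence on the distillation loss.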