Abstract

Recently, the high energy consumption of additive manufacturing (AM) has received increased attention. By extracting hidden information or highly representative features from energy-relevant data, knowledge distillation (KD) reduces the complexity and computational load of predictive models. However, the conventional distillation process relies on predetermined, fixed teacher and student models, which restricts how effectively knowledge is transferred from one model to the other. To reduce computational cost while maintaining acceptable performance, a teacher assistant (TA) was added to the teacher-student architecture. First, three baseline models were combined into a teacher ensemble to enhance prediction accuracy. Second, a teacher assistant was introduced to bridge the capacity gap between the ensemble and the simplified student model, allowing the complexity of the student model to be reduced. Using geometry-based features derived from layer-wise image data, a KD-based predictive model was developed, and its feasibility and effectiveness were evaluated against two independently trained student models. Compared with the independently trained student models, the proposed method achieved the lowest RMSE, MAE, and training time.
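The abstract outlines a three-stage pipeline: a teacher ensemble built from three baseline models, a teacher assistant (TA) that bridges the capacity gap, and a compact student distilled from the TA. The sketch below illustrates the general teacher-assistant distillation idea for a regression target such as energy consumption; the network sizes, the loss weighting alpha, and the helper names (mlp, distill) are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of teacher-assistant knowledge distillation for regression.
    # All layer sizes, names, and hyperparameters here are assumptions.
    import torch
    import torch.nn as nn

    def mlp(in_dim, hidden, out_dim=1):
        """Simple fully connected regressor; depth and width are placeholders."""
        layers, d = [], in_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.ReLU()]
            d = h
        layers.append(nn.Linear(d, out_dim))
        return nn.Sequential(*layers)

    def distill(teacher, student, loader, alpha=0.5, epochs=10, lr=1e-3):
        """Train `student` on a blend of the ground-truth loss and the
        teacher's (or teacher assistant's) predictions; `alpha` balances the two."""
        teacher.eval()
        opt = torch.optim.Adam(student.parameters(), lr=lr)
        mse = nn.MSELoss()
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():
                    y_teacher = teacher(x)        # soft targets from the larger model
                y_student = student(x)
                loss = alpha * mse(y_student, y) + (1 - alpha) * mse(y_student, y_teacher)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return student

    # Hypothetical usage, mirroring the ensemble -> TA -> student chain:
    # teacher = ...  # e.g., an averaged ensemble of three baseline regressors
    # ta = distill(teacher, mlp(16, [64, 64]), train_loader)   # bridge model
    # student = distill(ta, mlp(16, [16]), train_loader)       # compact student

Distilling the student from the intermediate TA rather than directly from the ensemble is the core design choice the abstract describes: the TA's capacity sits between the two, so the knowledge transfer at each step spans a smaller gap.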
