Abstract

Developing precise deep learning (DL) models for predicting tool wear is challenging, particularly due to the scarcity of experimental data. To address this issue, this paper introduces an approach that leverages tabular generative adversarial networks (TGAN) and conditional single-image GAN (ConSinGAN). These models are employed to generate synthetic data, thereby enriching the dataset and enhancing the robustness of the predictive models. The efficacy of this methodology was rigorously evaluated using publicly available milling datasets. Pre-processing of the acoustic emission data involved applying the Walsh-Hadamard transform, followed by the generation of spectrograms. Statistical attributes were then extracted from these spectrograms to form a comprehensive feature vector for model input. Three DL models—encoder-decoder long short-term memory (ED-LSTM), gated recurrent unit (GRU), and convolutional neural network (CNN)—were applied to assess their tool wear prediction capabilities. Under 10-fold cross-validation, these models achieved low root mean square error (RMSE) and mean absolute error (MAE) values of 0.02 and 0.16, respectively, underscoring the effectiveness of this approach. The results not only highlight the potential of TGAN and ConSinGAN in mitigating data scarcity but also demonstrate significant improvements in the accuracy of tool wear predictions, paving the way for more reliable and precise predictive maintenance in manufacturing processes.
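As a minimal illustrative sketch (not the authors' implementation), the Walsh-Hadamard preprocessing and statistical feature extraction described above might look like the following; the segment length, feature set, and function names here are assumptions for illustration only:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a 1-D signal.

    The input length must be a power of 2; returns the
    (unnormalized) transform coefficients as a new array.
    """
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    assert n & (n - 1) == 0, "length must be a power of 2"
    h = 1
    while h < n:
        # Butterfly step: combine pairs separated by h
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def stat_features(seg):
    """Simple statistical attributes of a (transformed) signal
    segment, stacked into a feature vector: mean, standard
    deviation, min, max, and RMS.  The exact attribute set used
    in the paper is not specified here; this is a placeholder.
    """
    seg = np.asarray(seg, dtype=float)
    rms = np.sqrt(np.mean(seg ** 2))
    return np.array([seg.mean(), seg.std(), seg.min(), seg.max(), rms])

# Example: transform a short constant segment, then extract features
segment = np.array([1.0, 1.0, 1.0, 1.0])
coeffs = fwht(segment)            # → [4., 0., 0., 0.]
features = stat_features(coeffs)  # 5-element feature vector
```

In practice each acoustic emission segment would be transformed this way, a spectrogram computed, and the resulting feature vectors fed to the ED-LSTM, GRU, and CNN models.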
