Abstract

Tool wear and faults affect the quality of the machined workpiece and disrupt the continuity of manufacturing. Accurate prediction of remaining useful life (RUL) is essential for guaranteeing processing quality and improving the productivity of automated systems. At present, the most commonly used methods for tool RUL prediction are trained on historical fault data. However, when studying new types of tools or machining high-value parts, fault datasets are difficult to acquire, which makes RUL prediction challenging under limited fault data. To overcome the shortcomings of the above prediction methods, a deep transfer reinforcement learning (DTRL) network based on the long short-term memory (LSTM) network is presented in this paper. Local features are extracted from consecutive sensor data to track the tool states, and the trained network size can be dynamically adjusted by controlling the time-sequence length. In the DTRL network, an LSTM network is employed to construct the value-function approximation for smoothly processing temporal information and mining long-term dependencies. On this basis, a novel strategy of Q-function update and transfer is presented to transfer the deep reinforcement learning (DRL) network trained on historical fault data to a new tool for RUL prediction. Finally, tool wear experiments are performed to validate the effectiveness of the DTRL model. The prediction results demonstrate that the proposed method achieves high accuracy and generalizes well across similar tools and cutting conditions.
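The Q-function update-and-transfer strategy described above can be illustrated with a minimal sketch. The paper uses an LSTM value-function approximator; here a simple linear approximator stands in for it (an assumption for brevity), and the reward model and data are simulated. The key step is initializing the new tool's Q-function from the weights pretrained on historical fault data, then fine-tuning with only a few new-tool samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_values(w, features):
    """Linear stand-in for the LSTM value-function approximator:
    returns Q(s, a) for every action given one feature vector."""
    return features @ w  # shape (n_actions,)

def q_update(w, s, a, r, s_next, alpha=0.01, gamma=0.9):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = r + gamma * q_values(w, s_next).max()
    td_error = target - q_values(w, s)[a]
    w[:, a] += alpha * td_error * s  # gradient step for the linear model
    return w

n_features, n_actions = 4, 3

# 1) Pretrain the source Q-function on plentiful historical fault data
#    (simulated here with random transitions).
w_source = rng.normal(size=(n_features, n_actions))
for _ in range(200):
    s, s_next = rng.normal(size=n_features), rng.normal(size=n_features)
    w_source = q_update(w_source, s, int(rng.integers(n_actions)),
                        float(rng.normal()), s_next)

# 2) Transfer: initialize the new tool's Q-function from the source weights,
#    then fine-tune with only a handful of new-tool samples.
w_target = w_source.copy()
for _ in range(10):
    s, s_next = rng.normal(size=n_features), rng.normal(size=n_features)
    w_target = q_update(w_target, s, int(rng.integers(n_actions)),
                        float(rng.normal()), s_next)
```

The design choice mirrors the paper's premise: the bulk of learning happens where fault data is abundant, and only a light adaptation is needed for the data-scarce new tool.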

Highlights

  • The cutting tool is a key component in manufacturing processes such as turning, milling, and cutting

  • The deep transfer reinforcement learning (DTRL) network is an effective prediction model for tool wear monitoring

  • Experimental results demonstrate the effectiveness of DTRL network for tool wear monitoring and remaining useful life (RUL) prediction


Introduction

The cutting tool is a key component in manufacturing processes such as turning, milling, and cutting. In data-driven methods, machine learning and deep learning approaches are used to process observation data for diagnosis and prognosis [7]. Unlike the above-mentioned approaches, deep reinforcement learning can directly map raw extracted features to the corresponding tool wear state, which helps to further improve the intelligence of prediction methods. To overcome the deficiencies of limited data and further improve the accuracy and intelligence of prediction methods, a deep transfer reinforcement learning (DTRL) method is investigated in this paper. This approach first extracts local features from consecutive time-series data to reduce the network size.
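The local-feature extraction step can be sketched as a sliding window over a consecutive sensor signal, computing simple statistics per window. The specific features and window parameters below are illustrative assumptions, not the paper's exact choices; tuning the window length and stride is what lets the time-sequence length, and hence the trained network size, be adjusted:

```python
import numpy as np

def extract_local_features(signal, window, stride):
    """Slide a window over a consecutive sensor signal and compute
    simple local statistics (mean, RMS, peak) for each window."""
    feats = []
    for start in range(0, len(signal) - window + 1, stride):
        seg = signal[start:start + window]
        feats.append([seg.mean(),                    # local mean
                      np.sqrt(np.mean(seg ** 2)),    # RMS energy
                      np.abs(seg).max()])            # peak amplitude
    return np.array(feats)

# Example: a 1000-sample signal, 100-sample windows with 50% overlap
sig = np.sin(np.linspace(0, 20 * np.pi, 1000))
features = extract_local_features(sig, window=100, stride=50)
print(features.shape)  # (19, 3): 19 windows, 3 features each
```

Each row of the resulting matrix is one time step of the sequence fed to the LSTM, so a longer stride directly shortens the sequence the network must process.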

Theoretical foundation
Proposed DTRL architecture
Parameter reinforcement Q learning
Deep Q transfer learning
DTRL for RUL prediction
RUL Prediction: Input
Benchmarking data description
Data preprocessing and DRL training
Tool Prediction Results
Model Comparison and Validation
Model Comparison
Model Validation
Conclusion