Abstract

A bearing is a machine element used in rotating machinery. Bearing faults can lead to both material losses and physical harm, so it is crucial to detect them early. In the literature, convolutional neural networks (CNNs) are widely used to detect and classify bearing faults, and 1D CNNs in particular perform well at extracting features directly from raw vibration signals. However, CNNs require a large number of training samples, while fault data can be scarce and costly to generate. Transfer learning is the transfer of knowledge between domains that are similar but not identical; thanks to the transferred knowledge, a smaller number of samples can suffice for training. In this study, the performance of transfer learning in CNNs is investigated, and cases with and without transfer learning are compared. To determine how many layers to fine-tune after the transfer, different numbers of layers are frozen and the optimal number of blocks is determined for each data set. The experiments show that transfer learning, combined with fine-tuning the optimal number of blocks, improves performance. While no convergence is observed in the loss and accuracy values when transfer learning is not employed, convergence is achieved and the number of incorrectly labeled samples is greatly reduced when transfer learning is utilized.
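The freeze-and-fine-tune scheme described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the "pretrained" 1D convolutional filters are random stand-ins for weights learned on a source-domain vibration data set, the two-class toy signals stand in for target-domain fault data, and only a logistic-regression head is trained while the convolutional block stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pretrained source-domain weights: 4 conv filters, kernel size 5.
# In the paper's setting these would come from a 1D CNN trained on another
# bearing data set; here they are random for illustration only.
conv_w = rng.normal(0.0, 0.1, size=(4, 5))

def features(x, w):
    """Frozen feature extractor: valid 1D cross-correlation, ReLU, global avg pool."""
    n = len(x) - w.shape[1] + 1
    out = np.array([[x[i:i + w.shape[1]] @ f for i in range(n)] for f in w])
    return np.maximum(out, 0).mean(axis=1)          # one feature per filter

# Toy target-domain signals: class 0 = low-frequency, class 1 = high-frequency.
t = np.linspace(0.0, 1.0, 64)
X = [np.sin(2 * np.pi * (3 if label == 0 else 15) * t)
     + 0.05 * rng.normal(size=64)
     for label in (0, 1) for _ in range(20)]
y = np.array([0] * 20 + [1] * 20)

# Transfer learning step: conv_w is frozen, so its features are precomputed
# once; only the small classification head is fine-tuned by gradient descent.
F = np.array([features(x, conv_w) for x in X])       # shape (40, 4)
head_w, head_b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ head_w + head_b)))  # sigmoid
    g = p - y                                          # log-loss gradient
    head_w -= 0.5 * F.T @ g / len(y)
    head_b -= 0.5 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(F @ head_w + head_b)))) > 0.5
acc = (pred == y).mean()
print(f"training accuracy with frozen conv features: {acc:.2f}")
```

Fine-tuning more blocks, as studied in the paper, would correspond to also updating some or all rows of `conv_w` during the gradient-descent loop instead of keeping them fixed.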
