Abstract
Vibration signals carry important information about the health state of a ball bearing and have proven effective for training machine learning models for fault diagnosis. However, the sampling rate and frequency resolution of the acquired signals play a key role in the detection analysis. Industrial organizations often seek cost-effective yet high-quality measurements, reducing sensor resolution to optimize their resource allocation. This paper compares the performance of supervised learning classifiers for the detection of bearing faults in induction machines using vibration signals sampled at various frequencies. Three classes of algorithms are tested: linear models, tree-based models, and neural networks. These algorithms are trained and evaluated on vibration data collected experimentally and then downsampled to various intermediate sampling rates, from 48 kHz down to 1 kHz, using a fractional downsampling method. The study highlights the trade-off between fault detection accuracy and sampling frequency. It shows that, depending on the machine learning algorithm used, higher training accuracies are not systematically achieved when training with vibration signals sampled at a relatively high frequency.
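The abstract does not specify how the fractional downsampling was implemented. As a minimal illustration of the idea, the sketch below (an assumption, not the authors' code) expresses a target rate as a rational factor of the 48 kHz source rate and performs a naive linear-interpolation resample; a production pipeline would apply an anti-aliasing low-pass filter before decimating (e.g. polyphase resampling).

```python
from fractions import Fraction

def rational_factors(fs_in, fs_out):
    """Return integers (up, down) such that fs_out = fs_in * up / down."""
    r = Fraction(fs_out, fs_in)
    return r.numerator, r.denominator

def downsample_linear(x, fs_in, fs_out):
    """Naive fractional downsampling by linear interpolation.

    Illustrative only: a real pipeline would low-pass filter x first
    to avoid aliasing at the lower rate.
    """
    n_out = int(len(x) * fs_out / fs_in)
    out = []
    for k in range(n_out):
        t = k * fs_in / fs_out          # fractional index into x
        i = int(t)
        frac = t - i
        if i + 1 < len(x):
            out.append(x[i] * (1 - frac) + x[i + 1] * frac)
        else:
            out.append(x[i])
    return out
```

For example, resampling 48 kHz to 20 kHz corresponds to upsampling by 5 and downsampling by 12, while 48 kHz to 12 kHz is a simple decimation by 4.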