Abstract

Convolutional neural networks (CNNs) have gained increasing attention in fault classification. However, the performance of a CNN is sensitive to its learning rate. Previous work has tuned the learning rate by "trial and error" and manual search, which depend heavily on expert experience and must be repeated on every dataset. Because fault data vary widely, these traditional tuning methods are time-consuming and labor-intensive for fault classification. To overcome this problem, in this article we develop a novel learning rate scheduler based on reinforcement learning (RL) for convolutional neural networks (RL-CNN) in fault classification, which schedules the learning rate efficiently and automatically. First, a new RL agent is designed to learn policies for adjusting the learning rate during training. Second, a new RL-CNN structure is developed to balance the agent's exploration and exploitation. Third, a bagging ensemble version of RL-CNN (RL-CNN-Ens) is presented. Three bearing datasets are used to test the performance of RL-CNN-Ens. The results show that RL-CNN-Ens outperforms traditional deep learning and machine learning methods. Meanwhile, RL-CNN-Ens can find learning rate schedules comparable to state-of-the-art human-designed schedulers, showing its potential in fault classification.
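The abstract does not give the agent's exact state, action, or reward design, but the core idea of an RL agent that balances exploration and exploitation while adjusting the learning rate can be sketched as follows. This is a hypothetical minimal illustration, not the paper's algorithm: an epsilon-greedy agent picks a multiplicative learning-rate action each epoch and is rewarded by the resulting loss improvement; the function name, action set, and reward are all assumptions.

```python
import random

# Hypothetical sketch (not the paper's exact method): an epsilon-greedy
# agent adjusts the learning rate multiplicatively each epoch and is
# rewarded by the resulting loss improvement.

ACTIONS = [0.5, 1.0, 2.0]  # candidate learning-rate multipliers (assumed)

def train_with_rl_scheduler(loss_fn, epochs=30, lr=0.1, eps=0.2, seed=0):
    """Return the best (lr, loss) pair found while scheduling lr with RL."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}      # action-value estimates
    counts = {a: 0 for a in ACTIONS}
    prev_loss = loss_fn(lr)
    best_lr, best_loss = lr, prev_loss
    for _ in range(epochs):
        # Balance exploration and exploitation with an epsilon-greedy choice.
        if rng.random() < eps:
            action = rng.choice(ACTIONS)       # explore
        else:
            action = max(q, key=q.get)         # exploit best-known action
        lr = min(max(lr * action, 1e-6), 1.0)  # clip lr to a sane range
        loss = loss_fn(lr)
        reward = prev_loss - loss              # reward = loss improvement
        counts[action] += 1
        q[action] += (reward - q[action]) / counts[action]  # running mean
        if loss < best_loss:
            best_lr, best_loss = lr, loss
        prev_loss = loss
    return best_lr, best_loss
```

In practice `loss_fn` would run one training epoch of the CNN and report validation loss; here any function of the learning rate (e.g. `lambda lr: (lr - 0.05) ** 2` as a stand-in) exercises the scheduling loop.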
