Abstract

As the core of the Sparseland model, dictionary learning has shown excellent performance in many fields, such as pattern recognition, fault diagnosis, noise reduction, and image recognition. Its key idea is that data can be sparsely represented over a specific dictionary consisting of a few basis atoms, which demands that this dictionary be accurate and well suited to making the data sparse. Learning a good dictionary requires sufficient and comprehensive training data, and an efficient dictionary learning algorithm is also essential. However, in many applications, especially fault diagnosis, training data are often scarce because of the cost of experiments, time constraints, or other reasons. Thus, a good sparse representation over a single learned dictionary is not guaranteed. To solve this problem, we propose a novel dictionary learning method, named deep and shared dictionary learning (DSDL), which combines a deep structure inspired by deep learning with a shared structure. In DSDL, the data are decomposed into several dictionary layers, where each deeper dictionary layer is learned from a few atoms of the previous layer. Meanwhile, the shared structure learns the features common to different classes and removes them to highlight the class-specific features. We apply DSDL to two experimental cases of fault diagnosis under time-varying conditions, and the results show that the proposed method consistently outperforms six state-of-the-art sparse representation methods. Compared with two popular deep learning methods, namely the convolutional neural network (CNN) and the deep belief network (DBN), DSDL is more accurate with small training samples.
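The layered decomposition described above can be illustrated with a minimal, generic sketch of multi-layer dictionary learning. This is not the authors' DSDL algorithm (in particular, it omits the shared structure); it only shows the idea of re-decomposing the sparse codes of one layer with a deeper dictionary. The synthetic data, layer sizes, and sparsity levels are assumptions chosen for illustration.

```python
# Minimal sketch of layered (deep) dictionary learning with scikit-learn.
# NOT the authors' DSDL method; a generic illustration under assumed settings.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(200, 64)                      # hypothetical training signals

# Layer 1: learn D1 and sparse codes A1 such that X ~= A1 @ D1
layer1 = DictionaryLearning(n_components=32, transform_algorithm='omp',
                            transform_n_nonzero_coefs=5, random_state=0)
A1 = layer1.fit_transform(X)                # layer-1 sparse codes
D1 = layer1.components_

# Layer 2: decompose the layer-1 codes again, A1 ~= A2 @ D2,
# mimicking a deeper dictionary learned from the previous layer
layer2 = DictionaryLearning(n_components=16, transform_algorithm='omp',
                            transform_n_nonzero_coefs=3, random_state=0)
A2 = layer2.fit_transform(A1)
D2 = layer2.components_

# Reconstruction through both layers: X ~= (A2 @ D2) @ D1
X_hat = A2 @ D2 @ D1
print('relative reconstruction error:',
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```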
