Abstract

Sparsity measures, which quantify the sparsity of signals, are often used as objective functions in signal processing and machine learning algorithms (e.g., sparse filtering, compressive sensing, blind deconvolution, and the fast Kurtogram). Classic sparsity measures include kurtosis, the Gini index, negative entropy, the ratio of the Lp norm to the Lq norm, the smoothness index, the L1 norm, and the L0 norm. To enrich the library of sparsity measures, this paper generalizes the classic Gini index to construct generalized Gini indices (GGIs). Firstly, inspired by the ratio of different quasi-arithmetic means, it is shown that the GGIs satisfy all six properties of good sparsity measures. The GGIs have a single parameter a⩾0; when a=1, the GGIs reduce to the classic Gini index. Secondly, further investigations reveal that: (1) the GGIs monotonically quantify sparsity changes of Bernoulli coefficients, and their sparsity quantification curves are complementary to those generated by our recently proposed Box-Cox sparsity measures (BCSMs); in particular, when the unique parameters of the GGIs and the BCSMs are equal to zero, both sets of curves reduce to that of negative entropy; (2) the GGIs converge as signal length increases; (3) when the squared envelope of white Gaussian noise is quantified by the GGIs, their theoretical values are derived as a/(a+1), which can serve as baselines for machine abnormality detection. Finally, two bearing run-to-failure datasets are used to validate the effectiveness of the GGIs for machine condition monitoring; the results show that the GGIs are effective in detecting incipient faults and indicating informative frequency bands. In the future, the proposed GGIs can be used in any algorithm that requires a sparsity measure.
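The abstract does not give the GGI formula itself, but it states that the GGIs reduce to the classic Gini index at a=1. As an illustration only, the following is a minimal sketch of that a=1 case, assuming the standard order-statistic form of the Gini index used as a sparsity measure (0 for a constant-magnitude signal, approaching 1 for a one-hot signal); the function name `gini_index` and the test signals are this sketch's own choices, not the paper's.

```python
import numpy as np

def gini_index(x):
    """Classic Gini index as a sparsity measure (the assumed a=1 case of the GGIs).

    Computes GI(x) = 1 - 2 * sum_k (x_(k) / ||x||_1) * ((N - k + 0.5) / N),
    where x_(k) are the magnitudes of x sorted in ascending order.
    Returns 0 for a constant-magnitude signal and 1 - 1/N for a one-hot signal.
    """
    x = np.sort(np.abs(np.asarray(x, dtype=float)))  # ascending magnitudes
    n = x.size
    total = x.sum()
    if total == 0.0:
        return 0.0  # all-zero signal: no sparsity information
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((x / total) * ((n - k + 0.5) / n))

# A one-hot (maximally sparse) signal scores higher than a constant (dense) one:
dense = gini_index([1.0, 1.0, 1.0, 1.0])    # 0.0
sparse = gini_index([0.0, 0.0, 0.0, 1.0])   # 1 - 1/4 = 0.75
```

The abstract's a/(a+1) baseline gives 1/2 at a=1, which is consistent with the known fact that the Gini coefficient of an exponential distribution (the distribution of the squared envelope of white Gaussian noise) equals 1/2; a large simulated exponential sample scored with this function lands near 0.5.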
