Wavelet scattering is a recent time-frequency transform that shares its convolutional architecture with convolutional neural networks, but it allows for faster training and often requires smaller training sets. It consists of a multistage non-linear transform that computes the deep spectrum of a signal by cascading a convolution, a non-linear operator and a pooling step at each stage, resulting in a powerful tool for signal classification when embedded in machine learning architectures. One of the most delicate parameters in convolutional architectures is the temporal sampling, which strongly affects the computational load as well as the classification rate. In this paper, the role of sampling in the wavelet scattering transform is studied for signal classification purposes. In particular, the role of subdivision schemes in compensating for the information lost through subsampling at each stage of the transform is investigated. Preliminary experimental results show that, starting from coarse grids, interpolatory subdivision schemes reproduce copies of the original scattering coefficients on a fixed full grid that still represent distinctive features of the signal classes. In fact, thanks to the ability of the scheme to reproduce the fractal properties of the transform through an efficient iterative refinement procedure, the reproduced coefficients yield classification rates similar to those provided by the native wavelet scattering transform. The relationship between the tension parameter of the scheme and the fractal dimension of its limit curve is also investigated.
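As a rough sketch of the pipeline described above, the fragment below combines a toy one-stage scattering computation (wavelet convolution, modulus non-linearity, low-pass averaging, subsampling) with refinement of the coarse coefficients by an interpolatory 4-point subdivision scheme with a tension parameter. The choice of the 4-point rule, the specific filters and all names (scattering_stage, fourpoint_refine, psi, phi, stride, w) are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def scattering_stage(x, psi, phi, stride):
    """One toy scattering stage: convolution with a wavelet psi, modulus
    non-linearity, low-pass averaging with phi, then subsampling."""
    u = np.abs(np.convolve(x, psi, mode="same"))   # |x * psi|
    s = np.convolve(u, phi, mode="same")           # pooling by averaging
    return u, s[::stride]                          # coarse-grid coefficients

def fourpoint_refine(p, w=1/16, levels=3):
    """Interpolatory 4-point subdivision with tension parameter w
    (w = 1/16 gives the classical Deslauriers-Dubuc scheme).
    Coarse samples are kept; new samples are inserted between each pair
    using the mask (-w, 1/2 + w, 1/2 + w, -w)."""
    p = np.asarray(p, dtype=float)
    for _ in range(levels):
        n = len(p)
        ext = np.r_[p[0], p, p[-1]]                # replicate endpoints
        new = (-w * ext[0:n-1] + (0.5 + w) * ext[1:n]
               + (0.5 + w) * ext[2:n+1] - w * ext[3:n+2])
        q = np.empty(2 * n - 1)
        q[0::2] = p                                # old points are retained
        q[1::2] = new                              # inserted midpoints
        p = q
    return p

# Toy usage: a generic Morlet-like band-pass psi and a Gaussian low-pass phi.
t = np.linspace(-1, 1, 65)
psi = np.cos(12 * np.pi * t) * np.exp(-t**2 / 0.05)
phi = np.exp(-t**2 / 0.2); phi /= phi.sum()
x = np.sin(2 * np.pi * 30 * np.linspace(0, 1, 1024))

_, s_coarse = scattering_stage(x, psi, phi, stride=8)     # heavily subsampled
s_refined = fourpoint_refine(s_coarse, w=1/16, levels=3)  # back toward the full grid
```

Varying w away from 1/16 changes the regularity, and hence the fractal dimension, of the limit curve of the scheme, which is the kind of relationship the abstract refers to.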