Abstract

Time series classification (TSC) is a long-standing machine learning problem with countless proposed algorithms spanning a multitude of fields. Whole-series, interval-based, shapelet-based, dictionary-based, and model-based methods are all past approaches to solving TSC. Deep learning approaches attempt to transfer the success of neural network (NN) architectures in image classification to TSC, but deep learning typically requires vast amounts of training data and computational power to produce meaningful results. What if a network could be grounded not in a biological brain, but in mathematics proven in theory? Better yet, what if that network were not as computationally expensive as deep networks, which have billions of parameters and require a surplus of training data? Shepard Interpolation Neural Networks (SINNs) provide exactly such a network: a shallow learning approach that requires minimal training samples and rests on a statistical interpolation technique, yet achieves strong results. These networks learn metric features that are more mathematically explainable and interpretable. In this paper, we apply the novel SINN architecture to a popular TSC benchmark, achieving state-of-the-art accuracy on several of its test sets while remaining competitive with other established algorithms. We also demonstrate that, when training data are scarce, the SINN outperforms other deep learning algorithms.
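For context on the interpolation technique the SINN builds on, the classic Shepard method (inverse distance weighting) estimates a value at a query point as a distance-weighted average of known samples, with weights proportional to the inverse of the distance raised to a power p. The sketch below is illustrative only: the function name, the choice of p, and the toy data are assumptions, not code or parameters from the paper.

```python
import numpy as np

def shepard_interpolate(x_query, x_known, y_known, p=2.0, eps=1e-12):
    """Inverse distance weighting (Shepard, 1968): f(x) is a weighted
    average of known values, with weights w_i = 1 / d(x, x_i)^p."""
    d = np.linalg.norm(x_known - x_query, axis=1)
    # If the query coincides with a known point, return its value exactly.
    hit = d < eps
    if hit.any():
        return y_known[hit][0]
    w = 1.0 / d**p
    return np.dot(w, y_known) / w.sum()

# Toy usage: interpolate a scalar field sampled at four 2-D points.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
print(shepard_interpolate(np.array([0.5, 0.5]), pts, vals))  # ~1.0
```

Because the weights are a closed-form function of distances, predictions of this kind can be traced back to specific training samples, which is the sense in which such interpolation-based networks are more mathematically explainable than deep architectures.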
