Abstract

For many years, linear models were applied across a variety of domains. Before applying these algorithms, studies first extracted features presumed to capture local patterns in the data using hand-crafted engineering techniques. More recently, deep learning has made it possible to feed data directly into a model without extensive hand-crafted feature engineering. In this paper, the proposed framework performs feature extraction in a self-supervised manner using both a Contextual Long Short-Term Memory (CLSTM) block and a Contextual Convolutional Neural Network (CCNN) block. The outputs of the CLSTM and CCNN blocks are concatenated, fed into an Attention block, passed through a Multilayer Perceptron (MLP) block, and finally passed through a terminal layer for classification. Applying this model to the time series classification (TSC) problem poses one major challenge: overfitting. We address this challenge in two ways: first, we tune the number of neurons in each stage; second, we introduce dropout after every layer in each stage of the model. Experiments on the University of California, Riverside (UCR) time series archive indicate the model's superiority.
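
As a rough illustration of the pipeline, the sketch below wires a standard LSTM branch and a 1-D CNN branch (stand-ins for the contextual CLSTM/CCNN blocks, whose internals the abstract does not specify) into concatenation, attention, an MLP, and a terminal classifier, with dropout after every layer as described. All layer sizes, head counts, and dropout rates are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class TSCNet(nn.Module):
    """Hypothetical sketch of the described TSC pipeline (PyTorch)."""

    def __init__(self, n_channels, n_classes, hidden=64, p_drop=0.3):
        super().__init__()
        # Recurrent branch (stand-in for the CLSTM block).
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        # Convolutional branch (stand-in for the CCNN block).
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Dropout(p_drop),  # dropout after every layer, per the paper
        )
        # Attention over the concatenated branch outputs (self-attention assumed).
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.attn_drop = nn.Dropout(p_drop)
        # MLP block followed by the terminal classification layer.
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),
        )
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):  # x: (batch, time, channels)
        h_lstm, _ = self.lstm(x)                                # (batch, time, hidden)
        h_conv = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, time, hidden)
        h = torch.cat([h_lstm, h_conv], dim=-1)                 # concatenate branches
        h, _ = self.attn(h, h, h)                               # attention block
        h = self.attn_drop(h).mean(dim=1)                       # pool over time
        return self.head(self.mlp(h))                           # MLP -> terminal layer
```

For example, `TSCNet(n_channels=1, n_classes=5)(torch.randn(8, 128, 1))` yields an (8, 5) tensor of class logits for a batch of eight univariate series of length 128.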
