Abstract
One-dimensional time series classification is an important research direction that plays an irreplaceable role in many fields. With the rapid development of deep learning, most traditional time series classification methods have gradually been replaced by neural-network-based methods. At present, the best-performing time series classification models are mostly based on Convolutional Neural Networks (CNNs), while models built on the Transformer architecture, which has shown outstanding performance in natural language processing and computer vision, have not stood out. To change this situation, we investigate the feasibility of training models based on a single Transformer architecture directly on small datasets without additional data processing. Furthermore, we propose a new model, CTCTime, which combines the Transformer architecture with CNNs to address one-dimensional time series classification. We compared CTCTime with 13 traditional algorithms on 44 datasets from the UCR archive and with 7 advanced methods on 85 UCR datasets; the UCR archive is a classic benchmark for one-dimensional time series classification. The experimental results demonstrate the model's feasibility, accuracy, and scalability.
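To make the hybrid idea concrete, the sketch below shows one common way to combine a CNN front-end with a Transformer encoder for one-dimensional series, using PyTorch. This is an illustrative assumption, not CTCTime's actual architecture: the layer counts, kernel sizes, and the global-average-pooling head are placeholders chosen for brevity.

```python
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    """Illustrative CNN + Transformer hybrid for 1-D series classification.

    The convolutions extract local features and downproject the raw series
    into a sequence of embeddings; the Transformer encoder then models
    long-range dependencies across that sequence. All hyperparameters here
    are placeholders, not the paper's configuration.
    """

    def __init__(self, n_classes: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # 1-D convolutions over the raw (single-channel) series
        self.cnn = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length) -> add a channel dimension for Conv1d
        z = self.cnn(x.unsqueeze(1))         # (batch, d_model, length)
        z = self.encoder(z.transpose(1, 2))  # (batch, length, d_model)
        return self.head(z.mean(dim=1))      # global average pool -> logits

# Usage: a batch of 8 univariate series of length 128, 5 classes
model = HybridCNNTransformer(n_classes=5)
logits = model(torch.randn(8, 128))
print(tuple(logits.shape))  # (8, 5)
```

Pooling over the time axis before the linear head keeps the model agnostic to input length, which matters on the UCR archive where series lengths vary widely between datasets.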