Structured temporal representation in time series classification with ROCKETs and hyperdimensional computing

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

Time series classification poses significant challenges due to the inherent temporal order of the data points and the existence of sequential dependencies between them. The ROCKET family, featuring methods like MiniROCKET, MultiROCKET, and HYDRA, is currently a leading approach in this domain, leveraging convolution kernels to aggregate temporal features into encodings for linear classifiers. However, these models encode temporal features over short temporal windows and then aggregate them as an unordered set of encodings over the longer temporal window of the entire data sequence. This prevents these models from capturing any longer sequence structure. To address this design drawback, we propose integrating hyperdimensional computing into ROCKET methods to explicitly incorporate temporal order of the short-term features within the entire time series. This approach enhances the discriminative power of encodings generated by MiniROCKET, MultiROCKET, and HYDRA where longer-term structure exists in the data, leading to increased classification performance with minimal computational overhead. More specifically, we introduce a method to represent time series as high-dimensional vectors through multiplicative binding of ROCKET encodings with encodings representing temporal order, applying this approach across various ROCKET methods. Additionally, we explore different high-dimensional vector representations of temporal order, yielding diverse similarity kernels that enhance classification accuracy. Through experiments on synthetic datasets, we highlight the limitations of ROCKET methods in handling temporal dependencies and show how the methods based on hyperdimensional computing overcome these limitations.
Furthermore, our extensive experimental evaluation with real-world datasets included in the recent UCR archive validates the advantages of our approach, consistently achieving classification improvements across all ROCKET methods that integrate hyperdimensional computing. Notably, our best model achieves a relative error rate reduction of over 50% compared to the best ROCKET model on several UCR datasets.
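The core construction the abstract describes, multiplicative binding of per-window encodings with temporal-order hypervectors before aggregation, can be sketched in a few lines. This is a minimal illustration with random stand-ins; the dimensionality, window count, and bipolar position hypervectors are assumptions, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096   # hypervector dimensionality (assumed)
T = 100    # number of short temporal windows (assumed)

# Random stand-ins for per-window ROCKET-style feature encodings.
window_encodings = rng.standard_normal((T, D))

# One random bipolar hypervector per temporal position, a simple
# stand-in for the temporal-order representations the paper explores.
position_hvs = rng.choice([-1.0, 1.0], size=(T, D))

# Multiplicative binding of each window encoding with its position,
# bundled (summed) into a single series-level hypervector.
series_hv = (window_encodings * position_hvs).sum(axis=0)

# For comparison, the order-agnostic aggregate ("bag of windows").
bag_hv = window_encodings.sum(axis=0)
```

Because each window is tagged by its position before aggregation, permuting the windows changes `series_hv` while `bag_hv` stays the same, which is exactly the longer-range structure the unordered ROCKET aggregation discards.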

Similar Papers
  • Research Article
  • Citations: 3
  • 10.1145/3503541
Store-n-Learn: Classification and Clustering with Hyperdimensional Computing across Flash Hierarchy
  • May 31, 2022
  • ACM Transactions on Embedded Computing Systems
  • Saransh Gupta + 10 more

Processing large amounts of data, especially in learning algorithms, poses a challenge for current embedded computing systems. Hyperdimensional (HD) computing (HDC) is a brain-inspired computing paradigm that works with high-dimensional vectors called hypervectors. HDC replaces several complex learning computations with bitwise and simpler arithmetic operations at the expense of an increased amount of data due to mapping the data into high-dimensional space. These hypervectors, more often than not, cannot be stored in memory, resulting in long data transfers from storage. In this article, we propose Store-n-Learn, an in-storage computing solution that performs HDC classification and clustering by implementing encoding, training, retraining, and inference across the flash hierarchy. To hide the latency of training and enable efficient computation, we introduce the concept of batching in HDC. We also present on-chip acceleration for HDC encoding in flash planes. This enables us to exploit the high parallelism provided by the flash hierarchy and encode multiple data points in parallel in both batched and non-batched fashion. Store-n-Learn also implements a single top-level FPGA accelerator with novel implementations for HDC classification training, retraining, inference, and clustering on the encoded data. Our evaluation over 10 popular datasets shows that Store-n-Learn is on average 222× (543×) faster than CPU and 10.6× (7.3×) faster than the state-of-the-art in-storage computing solution, INSIDER, for HDC classification (clustering).

  • Conference Article
  • Citations: 17
  • 10.1109/ijcnn55064.2022.9892158
HDC-MiniROCKET: Explicit Time Encoding in Time Series Classification with Hyperdimensional Computing
  • Jul 18, 2022
  • Kenny Schlegel + 2 more

Classification of time series data is an important task for many application domains. One of the best existing methods for this task, in terms of accuracy and computation time, is MiniROCKET. In this work, we extend this approach to provide better global temporal encodings using hyperdimensional computing (HDC) mechanisms. HDC (also known as Vector Symbolic Architectures, VSA) is a general method to explicitly represent and process information in high-dimensional vectors. It has previously been used successfully in combination with deep neural networks and other signal processing algorithms. We argue that the internal high-dimensional representation of MiniROCKET is well suited to be complemented by the algebra of HDC. This leads to a more general formulation, HDC-MiniROCKET, where the original algorithm is only a special case. We will discuss and demonstrate that HDC-MiniROCKET can systematically overcome catastrophic failures of MiniROCKET on simple synthetic datasets. These results are confirmed by experiments on the 128 datasets from the UCR time series classification benchmark. The extension with HDC can achieve considerably better results on datasets with high temporal dependence at about the same computational effort for inference.
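One way to give temporal positions the kind of graded, explicit encoding discussed above is fractional power encoding of a random phasor hypervector, a standard HDC/VSA mechanism. The sketch below is generic and not necessarily HDC-MiniROCKET's exact scheme; the dimensionality and similarity measure are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2048  # hypervector dimensionality (assumed)

# A base hypervector of unit-magnitude complex phasors; raising it
# element-wise to a (fractional) power t yields a position encoding.
angles = rng.uniform(-np.pi, np.pi, size=D)
base = np.exp(1j * angles)

def position_hv(t):
    """Encoding of temporal position t via element-wise fractional power."""
    return base ** t

def sim(a, b):
    """Normalized real inner product between two phasor hypervectors."""
    return np.real(np.vdot(a, b)) / D

# Similarity decays smoothly with temporal distance:
s_near = sim(position_hv(1.0), position_hv(1.1))
s_far = sim(position_hv(1.0), position_hv(5.0))
```

Nearby positions yield highly similar hypervectors while distant ones are nearly orthogonal, which is what lets a bound representation remain tolerant to small temporal shifts instead of encoding order rigidly.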

  • Conference Article
  • Citations: 10
  • 10.1109/codes-isss55005.2022.00017
Brain-Inspired Hyperdimensional Computing for Ultra-Efficient Edge AI
  • Oct 1, 2022
  • Hussam Amrouch + 9 more

Hyperdimensional Computing (HDC) is rapidly emerging as an attractive alternative to traditional deep learning algorithms. Despite the profound success of Deep Neural Networks (DNNs) in many domains, the amount of computational power and storage that they demand during training makes deploying them in edge devices very challenging if not infeasible. This, in turn, inevitably necessitates streaming the data from the edge to the cloud which raises serious concerns when it comes to availability, scalability, security, and privacy. Further, the nature of data that edge devices often receive from sensors is inherently noisy. However, DNN algorithms are very sensitive to noise, which makes accomplishing the required learning tasks with high accuracy immensely difficult. In this paper, we aim at providing a comprehensive overview of the latest advances in HDC. HDC aims at realizing real-time performance and robustness through using strategies that more closely model the human brain. HDC is, in fact, motivated by the observation that the human brain operates on high-dimensional data representations. In HDC, objects are thereby encoded with high-dimensional vectors which have thousands of elements. In this paper, we will discuss the promising robustness of HDC algorithms against noise along with the ability to learn from little data. Further, we will present the outstanding synergy between HDC and beyond von Neumann architectures and how HDC opens doors for efficient learning at the edge due to the ultra-lightweight implementation that it needs, contrary to traditional DNNs.

  • Research Article
  • Citations: 779
  • 10.1007/s12559-009-9009-8
Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors
  • Jan 28, 2009
  • Cognitive Computation
  • Pentti Kanerva

The 1990s saw the emergence of cognitive models that depend on very high dimensionality and randomness. They include Holographic Reduced Representations, Spatter Code, Semantic Vectors, Latent Semantic Analysis, Context-Dependent Thinning, and Vector-Symbolic Architecture. They represent things in high-dimensional vectors that are manipulated by operations that produce new high-dimensional vectors in the style of traditional computing, in what is called here hyperdimensional computing on account of the very high dimensionality. The paper presents the main ideas behind these models, written as a tutorial essay in hopes of making the ideas accessible and even provocative. A sketch of how we have arrived at these models, with references and pointers to further reading, is given at the end. The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.
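The style of computing Kanerva describes rests on a small algebra over random hypervectors. The sketch below uses binary hypervectors with XOR binding and Hamming similarity, one of several equivalent algebras covered in the tutorial:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10000  # dimensionality in the range the paper advocates

# Two random binary hypervectors representing arbitrary items.
a = rng.integers(0, 2, D)
b = rng.integers(0, 2, D)

# Binding via element-wise XOR: the result is quasi-orthogonal
# to both inputs...
bound = a ^ b

# ...yet invertible: binding with b again recovers a exactly.
recovered = bound ^ b

def hamming_sim(x, y):
    """Fraction of agreeing bits; ~0.5 for unrelated hypervectors."""
    return float(np.mean(x == y))
```

Here `hamming_sim(bound, a)` hovers around 0.5 (unrelated), while `hamming_sim(recovered, a)` is exactly 1.0, illustrating how structured representations can be composed and decomposed without losing the constituents.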

  • Conference Article
  • Citations: 6
  • 10.1109/hsi55341.2022.9869459
Evaluating the Adversarial Robustness of Text Classifiers in Hyperdimensional Computing
  • Jul 28, 2022
  • Harsha Moraliyage + 3 more

Hyperdimensional (HD) Computing leverages random high-dimensional vectors (>10,000 dimensions) known as hypervectors for data representation. This high-dimensional feature representation is inherently redundant, which results in increased robustness against noise and also enables the use of computationally simple operations for all vector functions. These two properties of hypervectors have led to energy-efficient and fast learning capabilities in numerous Artificial Intelligence (AI) applications. Despite the increasing number of such AI HD applications, their susceptibility to adversarial attacks has not been explored, specifically in the text domain. To the best of our knowledge, this is the first research endeavour to evaluate the adversarial robustness of HD text classifiers and report on their vulnerability to such attacks. In this paper, we designed and developed n-gram-based HD computing text classifiers for two primary applications of HD computing: language recognition and text classification. We then performed a set of character-level and word-level grey-box adversarial attacks, where the attacker’s goal is to mislead the target HD computing classifier into producing false prediction labels while keeping the added perturbation noise as low as possible. Our results show that adversarial examples generated by the attacks can mislead the HD computing classifiers into producing incorrect prediction labels. However, HD computing classifiers show a higher degree of adversarial robustness in language recognition than in text classification tasks. Their robustness against character-level attacks is significantly higher than against word-level attacks, and they achieve the highest accuracy compared to deep learning-based classifiers. Finally, we evaluate the effectiveness of adversarial training as a possible defense strategy against adversarial attacks on HD computing text classifiers.

  • Conference Article
  • Citations: 13
  • 10.1145/3400302.3415723
THRIFTY
  • Nov 2, 2020
  • Saransh Gupta + 7 more

Hyperdimensional computing (HDC) is a brain-inspired computing paradigm that works with high-dimensional vectors, hypervectors, instead of numbers. HDC replaces several complex learning computations with bitwise and simpler arithmetic operations, resulting in a faster and more energy-efficient learning algorithm. However, it comes at the cost of an increased amount of data to process due to mapping the data into high-dimensional space. While some datasets may nearly fit in memory, the resulting hypervectors more often than not cannot be stored in memory, resulting in long data transfers from storage. In this paper, we propose THRIFTY, an in-storage computing (ISC) solution that performs HDC encoding and training across the flash hierarchy. To hide the latency of training and enable efficient computation, we introduce the concept of batching in HDC, which allows us to split HDC training into sub-components and process them independently. We also present, for the first time, on-chip acceleration for HDC, which uses simple low-power digital circuits to implement HDC encoding in flash planes. This enables us to exploit the high internal parallelism provided by the flash hierarchy and encode multiple data points in parallel with negligible latency overhead. THRIFTY also implements a single top-level FPGA accelerator, which further processes the data obtained from the chips. We exploit the state-of-the-art INSIDER ISC infrastructure to implement the top-level accelerator and provide software support to THRIFTY. THRIFTY runs HDC training completely in storage while almost entirely hiding the latency of computation. Our evaluation over five popular classification datasets shows that THRIFTY is on average 1612× faster than a CPU server and 14.4× faster than the state-of-the-art ISC solution, INSIDER, for HDC encoding and training.

  • Research Article
  • Citations: 27
  • 10.1016/j.asoc.2022.109494
Time series classification based on temporal features
  • Aug 13, 2022
  • Applied Soft Computing
  • Cun Ji + 5 more


  • Conference Article
  • Citations: 24
  • 10.1109/islped52811.2021.9502498
MIMHD: Accurate and Efficient Hyperdimensional Inference Using Multi-Bit In-Memory Computing
  • Jul 26, 2021
  • Arman Kazemi + 5 more

Hyperdimensional Computing (HDC) is an emerging computational framework that mimics important brain functions by operating over high-dimensional vectors, called hypervectors (HVs). In-memory computing implementations of HDC are desirable since they can significantly reduce data transfer overheads. All existing in-memory HDC platforms consider binary HVs where each dimension is represented with a single bit. However, utilizing multi-bit HVs allows HDC to achieve acceptable accuracies in lower dimensions, which in turn leads to higher energy efficiencies. Thus, we propose a highly accurate and efficient multi-bit in-memory HDC inference platform called MIMHD. MIMHD supports multi-bit operations using ferroelectric field-effect transistor (FeFET) crossbar arrays for multiply-and-add and FeFET multi-bit content-addressable memories for associative search. We also introduce a novel hardware-aware retraining framework (HWART) that trains the HDC model to learn to work with MIMHD. For six popular datasets and 4000-dimension HVs, MIMHD using 3-bit (2-bit) precision HVs achieves (i) average accuracies of 92.6% (88.9%), which is 8.5% (4.8%) higher than binary implementations; (ii) 84.1× (78.6×) energy improvement over a GPU; and (iii) 38.4× (34.3×) speedup over a GPU, respectively. The 3-bit MIMHD is 4.3× and 13× faster and more energy-efficient than binary HDC accelerators while achieving similar accuracies.

  • Research Article
  • Citations: 1
  • 10.1007/s10462-025-11181-2
Classification using hyperdimensional computing: a review with comparative analysis
  • Mar 17, 2025
  • Artificial Intelligence Review
  • Pere Vergés + 5 more

Hyperdimensional computing (HD), also known as vector symbolic architectures (VSA), is an emerging and promising paradigm for cognitive computing. At its core, HD/VSA is characterized by its distinctive approach to compositionally representing information using high-dimensional randomized vectors. The recent surge in research within this field gains momentum from its computational efficiency stemming from low-resolution representations and ability to excel in few-shot learning scenarios. Nonetheless, the current literature is missing a comprehensive comparative analysis of various methods since each of them uses a different benchmark to evaluate its performance. This gap obstructs the monitoring of the field’s state-of-the-art advancements and acts as a significant barrier to its overall progress. To address this gap, this review not only offers a conceptual overview of the latest literature but also introduces a comprehensive comparative study of HD/VSA classification methods. The exploration starts with an overview of the strategies proposed to encode information as high-dimensional vectors. These vectors serve as integral components in the construction of classification models. Furthermore, we evaluate diverse classification methods as proposed in the existing literature. This evaluation encompasses techniques such as retraining and regenerative training to augment the model’s performance. To conclude our study, we present a comprehensive empirical study. This study serves as an in-depth analysis, systematically comparing various HD/VSA classification methods using two benchmarks, the first being a set of seven popular datasets used in HD/VSA and the second consisting of 121 datasets being the subset from the UCI Machine Learning repository. To facilitate future research on classification with HD/VSA, we open-sourced the benchmarking and the implementations of the methods we review. 
First, since the considered data are tabular, encodings based on key-value pairs emerge as the optimal choice, boasting superior accuracy while maintaining high efficiency. Second, iterative adaptive methods demonstrate remarkable efficacy, potentially complemented by a regenerative strategy, depending on the specific problem. Furthermore, we show how HD/VSA is able to generalize when trained with a limited number of training instances. Lastly, we demonstrate the robustness of HD/VSA methods by subjecting the model memory to a large number of bit flips. The results illustrate that the model’s performance remains reasonably stable until roughly 40% of the bits are flipped, at which point performance degrades drastically. Overall, this study performed a thorough evaluation of the different methods: on the one hand, a positive trend was observed in terms of improving classification performance; on the other hand, these developments could often be surpassed by off-the-shelf methods. This calls for better integration with the broader machine learning literature; the developed benchmarking framework provides practical means for doing so.
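The key-value encoding that the review finds optimal for tabular data can be sketched as follows. The random bipolar keys and quantized value levels are illustrative assumptions; the reviewed methods differ in details such as how correlated the level hypervectors are:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 8192          # hypervector dimensionality (assumed)
n_features = 4    # tabular features (assumed)
n_levels = 10     # quantization bins for feature values (assumed)

# One random bipolar "key" hypervector per feature, and one random
# "level" hypervector per quantization bin of the feature value.
keys = rng.choice([-1, 1], size=(n_features, D))
levels = rng.choice([-1, 1], size=(n_levels, D))

def encode_row(row):
    """Encode a tabular row (values in [0, 1)) as one hypervector:
    bind each feature key with its value level, then bundle by summation."""
    bins = (np.asarray(row) * n_levels).astype(int)
    return (keys * levels[bins]).sum(axis=0)

hv = encode_row([0.1, 0.5, 0.9, 0.3])
```

Rows that agree on most feature values produce hypervectors with large inner products, so a simple similarity-based classifier can operate directly on these encodings.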

  • Research Article
  • Citations: 2
  • 10.1145/3524071
Brain-inspired Cognition in Next-generation Racetrack Memories
  • Nov 30, 2022
  • ACM Transactions on Embedded Computing Systems
  • Asif Ali Khan + 5 more

Hyperdimensional computing (HDC) is an emerging computational framework inspired by the brain that operates on vectors with thousands of dimensions to emulate cognition. Unlike conventional computational frameworks that operate on numbers, HDC, like the brain, uses high-dimensional random vectors and is capable of one-shot learning. HDC is based on a well-defined set of arithmetic operations and is highly error resilient. The core operations of HDC manipulate HD vectors in bulk bit-wise fashion, offering many opportunities to leverage parallelism. Unfortunately, on conventional von Neumann architectures, the continuous movement of HD vectors between the processor and the memory can make the cognition task prohibitively slow and energy intensive. Hardware accelerators only marginally improve related metrics. In contrast, even partial implementations of an HDC framework inside memory can provide considerable performance/energy gains, as demonstrated in prior work using memristors. This article presents an architecture based on racetrack memory (RTM) to conduct and accelerate the entire HDC framework within memory. The proposed solution requires minimal additional CMOS circuitry by leveraging a read operation across multiple domains in RTMs, called transverse read (TR), to realize exclusive-or (XOR) and addition operations. To minimize the CMOS circuitry overhead, an RTM nanowire-based counting mechanism is proposed. Using language recognition as the example workload, the proposed RTM HDC system reduces the energy consumption by 8.6× compared to the state-of-the-art in-memory implementation. Compared to a dedicated hardware design realized with an FPGA, RTM-based HDC processing demonstrates 7.8× and 5.3× improvements in the overall runtime and energy consumption, respectively.

  • Research Article
  • Citations: 1
  • 10.1145/3724129
Federated Hyperdimensional Computing: Comprehensive Analysis and Robust Communication
  • May 29, 2025
  • ACM Transactions on Internet of Things
  • Ye Tian + 4 more

Federated learning is a distributed learning method that trains a model locally on multiple clients; it has been used in numerous fields. Current convolutional neural network (CNN)-based federated learning approaches face challenges in computational cost, communication efficiency, and robust communication. Recently, Hyperdimensional Computing (HDC) has been recognized as a promising technique to address these challenges. HDC encodes data as high-dimensional vectors and enables lightweight training and communication through simple parallel vector operations. Several HDC-based federated learning methods have been proposed. Although existing methods improve computational efficiency and reduce communication cost, they struggle to handle complex learning tasks and are not robust to unreliable wireless channels. In this work, we introduce a synergetic federated learning framework, FHDnn. By leveraging the complementary strengths of CNNs and HDC, FHDnn can achieve optimal performance on complex image tasks while maintaining good computational and communication efficiency. Furthermore, we demonstrate in detail the convergence of using HDC in a generalized federated learning framework, providing theoretical guarantees for HDC-based federated learning approaches. Finally, we design three communication strategies that further improve the communication efficiency of FHDnn by 32×. Experiments demonstrate that FHDnn converges 3× faster than CNN-based federated learning methods, reduces the communication cost by 2,112×, and reduces the local computation and energy consumption by 192×. In addition, it is robust to unreliable communication with bit errors, noise, and packet loss.

  • Research Article
  • Citations: 79
  • 10.1109/tcyb.2018.2789422
Multiobjective Learning in the Model Space for Time Series Classification.
  • Jan 22, 2018
  • IEEE Transactions on Cybernetics
  • Zhichen Gong + 3 more

A well-defined distance is critical for the performance of time series classification. Existing distance measurements can be categorized into two branches. One is to utilize handcrafted features for calculating distance, e.g., dynamic time warping, which is limited in exploiting the dynamic information of time series. The other methods make use of the dynamic information by approximating the time series with a generative model, e.g., the Fisher kernel. However, previous distance measurements for time series seldom exploit the label information, which is helpful for classification via distance metric learning. To attain the benefits of the dynamic information of time series and the label information simultaneously, this paper proposes a multiobjective learning algorithm for both time series approximation and classification, termed multiobjective model-metric (MOMM) learning. In MOMM, a recurrent network is exploited as the temporal filter, based on which a generative model is learned for each time series as a representation of that series. The models span a non-Euclidean space, where the label information is utilized to learn the distance metric. The distance between time series is then calculated as the model distance weighted by the learned metric. The network size is also optimized to learn parsimonious representations. MOMM simultaneously optimizes the data representation, the time series model separation, and the network size. The experiments show that MOMM achieves not only superior overall performance on uni/multivariate time series classification but also promising time series prediction performance.

  • Research Article
  • 10.1609/aaai.v39i21.34442
Bridging the Gap Between Hyperdimensional Computing and Kernel Methods via the Nyström Method
  • Apr 11, 2025
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Quanling Zhao + 4 more

Hyperdimensional computing (HDC) is an approach from the cognitive science literature for solving information processing tasks using data represented as high-dimensional random vectors. The technique has a rigorous mathematical backing, and is easy to implement in energy-efficient and highly parallel hardware like FPGAs and "processing-in-memory" architectures. The effectiveness of HDC in machine learning largely depends on how raw data is mapped to high-dimensional space. In this work, we propose NysHD, a new method for constructing this mapping that is based on the Nyström method from the literature on kernel approximation. Our approach provides a simple recipe to turn any user-defined positive-semidefinite similarity function into an equivalent mapping in HDC. There is a vast literature on the design of such functions for learning problems. Our approach provides a mechanism to import them into the HDC setting, expanding the types of problems that can be tackled using HDC. Empirical evaluation against existing HDC encoding methods shows that NysHD can achieve, on average, 11% and 17% better classification accuracy on graph and string datasets respectively.
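The Nyström construction this paper builds on can be sketched in a few lines: pick landmark points, whiten by the inverse square root of their Gram matrix, and use kernel evaluations against the landmarks as an explicit feature map. The RBF kernel and all sizes below are illustrative assumptions, not NysHD's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(4)

def kernel(X, Y, gamma=0.5):
    """Any user-defined PSD similarity works; an RBF kernel is used here."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = rng.standard_normal((200, 5))                  # synthetic data
landmarks = X[rng.choice(200, 32, replace=False)]  # Nystrom landmarks

# Inverse square root of the landmark Gram matrix via eigendecomposition.
w, V = np.linalg.eigh(kernel(landmarks, landmarks))
W = V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-10, None))) @ V.T

def nystrom_map(Z):
    """Explicit feature map whose inner products approximate the kernel."""
    return kernel(Z, landmarks) @ W
```

On the landmarks themselves the approximation is exact: `nystrom_map(landmarks) @ nystrom_map(landmarks).T` reproduces their Gram matrix, and for other points the inner products approximate the kernel, which is what lets an arbitrary PSD similarity be imported into a vector-space (HDC-style) representation.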

  • Conference Article
  • Citations: 53
  • 10.1109/hpca51647.2021.00028
Revisiting HyperDimensional Learning for FPGA and Low-Power Architectures
  • Feb 1, 2021
  • Mohsen Imani + 7 more

Today's applications use machine learning algorithms to analyze the data collected from a swarm of devices on the Internet of Things (IoT). However, most existing learning algorithms are too complex to enable real-time learning on IoT devices with limited resources and computing power. Recently, Hyperdimensional computing (HDC) was introduced as an alternative computing paradigm for enabling efficient and robust learning. HDC emulates cognitive tasks by representing values as patterns of neural activity in high-dimensional space. HDC first encodes all data points to high-dimensional vectors. It then efficiently performs the learning task using a well-defined set of operations. Existing HDC solutions have two main issues that hinder their deployment on low-power embedded devices: (i) the encoding module is costly, dominating 80% of the entire training performance; (ii) the HDC model size and the computation cost grow significantly with the number of classes in online inference. In this paper, we propose a novel architecture, LookHD, which enables real-time HDC learning on low-power edge devices. LookHD exploits computation reuse to memoize the encoding module and simplify its computation to a single memory access. LookHD also addresses inference scalability by exploiting HDC's governing mathematics, which compresses the trained HDC model into a single hypervector. We present how the proposed architecture can be implemented on existing low-power architectures: an ARM processor and an FPGA design. We evaluate the efficiency of the proposed approach on a wide range of practical classification problems such as activity recognition, face recognition, and speech recognition. Our evaluations show that LookHD can achieve, on average, 28.3× faster and 97.4× more energy-efficient training compared to the state-of-the-art HDC implemented on an FPGA. Similarly, in inference, LookHD is 2.2× faster, 4.1× more energy-efficient, and has a 6.3× smaller model size than the same state-of-the-art algorithms.

  • Conference Article
  • Citations: 22
  • 10.1109/iv48863.2021.9576028
Multivariate Time Series Analysis for Driving Style Classification using Neural Networks and Hyperdimensional Computing
  • Jul 11, 2021
  • Kenny Schlegel + 3 more

In this paper, we present a novel approach for driving style classification based on time series data. Instead of automatically learning the embedding vector for the temporal representation of the input data with Recurrent Neural Networks, we propose a combination of Hyperdimensional Computing (HDC) for data representation in high-dimensional vectors and much simpler feed-forward neural networks. This approach provides three key advantages: first, instead of having a “black box” of Recurrent Neural Networks learning the temporal representation of the data, our approach allows us to encode this temporal structure in high-dimensional vectors in a human-comprehensible way using the algebraic operations of HDC, while relying only on feed-forward neural networks for the classification task. Second, we show that this combination is able to achieve at least similar and even slightly superior classification accuracy compared to state-of-the-art Long Short-Term Memory (LSTM)-based networks while significantly reducing training time and the amount of data necessary for successful learning. Third, our HDC-based data representation, as well as the feed-forward neural network, allows implementation in the substrate of Spiking Neural Networks (SNNs). SNNs show promise to be orders of magnitude more energy-efficient than their rate-based counterparts while maintaining comparable prediction accuracy when deployed on dedicated neuromorphic computing hardware, which could be an energy-efficient addition in future intelligent vehicles with tight restrictions regarding on-board computing and energy resources. We present a thorough analysis of our approach on a publicly available data set, including a comparison with state-of-the-art reference models.
