Crossmamba: multivariate time series forecasting model for cross-temporal and cross-dimensional dependencies with Mamba


Similar Papers
  • Research Article
  • Cited by 5
  • 10.1016/j.bdr.2023.100377
Spatiotemporal Prediction Based on Feature Classification for Multivariate Floating-Point Time Series Lossy Compression
  • May 1, 2023
  • Big Data Research
  • Huimin Feng + 3 more

  • Research Article
  • Cited by 54
  • 10.1162/neco.2007.19.7.1962
Outliers Detection in Multivariate Time Series by Independent Component Analysis
  • Jul 1, 2007
  • Neural Computation
  • Roberto Baragona + 1 more

In multivariate time series, outlying data that do not fit the common pattern may often be observed. Occurrences of outliers are unpredictable events that may severely distort the analysis of a multivariate time series. For instance, model building, seasonality assessment, and forecasting may be seriously affected by undetected outliers. The dependence structure of a multivariate time series gives rise to the well-known smearing and masking phenomena that prevent the use of most outlier identification techniques. It may be noticed, however, that a convenient way of representing multiple outliers consists of superimposing a deterministic disturbance on a Gaussian multivariate time series. Outliers may then be modeled as non-Gaussian time series components. Independent component analysis is a recently developed tool that is likely to be able to extract possible outlier patterns. In practice, independent component analysis may be used to analyze multivariate observable time series and separate regular and outlying unobservable components. In the factor models framework too, it is shown that independent component analysis is a useful tool for the detection of outliers in multivariate time series. Several algorithms that perform independent component analysis are compared. All algorithms were found effective in detecting various types of outliers, such as patches, level shifts, and isolated outliers, even at the beginning or the end of the stretch of observations. Also, there is no appreciable difference in the ability of different algorithms to display the outlying observation pattern.

  • Book Chapter
  • 10.1007/978-981-10-6385-5_6
Research on Pattern Matching Method of Multivariate Hydrological Time Series
  • Jan 1, 2017
  • Zhen Gai + 3 more

Existing pattern matching methods for multivariate time series can hardly measure the similarity of multivariate hydrological time series accurately and efficiently. Considering the characteristics of multivariate hydrological time series, namely the continuity and global features of the variables, we propose a pattern matching method, PP-DTW, based on dynamic time warping. In this method, the multivariate time series is first segmented, and the average of each segment is used as the feature. Then PCA is applied to the feature sequence. Finally, the weighted DTW distance is used as the measure of similarity between sequences. Carrying out experiments on hydrological data of the Chu River, we conclude that the method can effectively describe the overall characteristics of a multivariate time series and has a good matching effect on multivariate hydrological time series.
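The PP-DTW pipeline described above (segment, average each segment, then compare with a warped distance) can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the segment length and toy series are assumptions, and the PCA and weighting steps are omitted.

```python
def segment_means(series, seg_len):
    """PP-DTW feature step: compress a series into per-segment averages."""
    return [sum(series[i:i + seg_len]) / len(series[i:i + seg_len])
            for i in range(0, len(series), seg_len)]

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two feature sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of the three admissible warping moves.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1, 1, 2, 3, 4, 5, 6, 7]  # same shape, shifted in time
print(dtw_distance(segment_means(x, 2), segment_means(y, 2)))
```

Because DTW allows elastic alignment, the shifted copy stays close to the original, which is why the paper prefers a DTW-based measure over a rigid pointwise distance.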

  • Research Article
  • Cited by 2
  • 10.11648/j.ajtas.20211001.18
Studying Changes on Stock Market Transactions Using Different Techniques for Multivariate Time Series
  • Jan 1, 2021
  • American Journal of Theoretical and Applied Statistics
  • Ahmed Mohamed Mohamed Elsayed

Many studies have dealt with univariate time series data, but the analysis of multivariate time series is rarely discussed. This article discusses the theoretical and numerical aspects of different techniques for analyzing multivariate time series data: ANN, ARIMA, GLM, and VARS models. All techniques are used to analyze data obtained from the Egypt Stock Exchange Market. The R program is used with several packages: "neuralnet", "nnet", "forecast", "MTS", and "vars". Forecasting accuracy is measured using ME, ACF, MAE, MPE, RMSE, MASE, and MAPE, for both seasonal and non-seasonal time series data. The best ARIMA model with minimum error is constructed and tested, and the lag order of the model is identified. The Granger causality test indicated that the Exchange rate is useful for forecasting the other time series, and the instantaneous-causality test indicated instantaneous causality between the Exchange rate and the other series. For non-seasonal data, the NNAR() model is equivalent to the ARIMA() model; for seasonal data, the NNAR(p,P,0)[m] model is equivalent to an ARIMA(p,0,0)(P,0,0)[m] model. For these data, we concluded that ANN and GLM fits of multivariate seasonal time series are better than those of multivariate non-seasonal time series. The transactions of the Finance, Household, and Chemicals sectors are significant for the Exchange rate in the non-seasonal case. Forecasts based on stationary time series data are smoother and more accurate. The VARS model is more accurate than the VAR model for ARIMA(0,0,1). VAR forecasts are made over a short horizon, because prediction over a long horizon becomes unreliable or uniform.
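The accuracy measures listed above (ME, MAE, RMSE, MAPE) have standard definitions that are easy to compute directly; this plain-Python sketch uses toy actual/forecast values that are illustrative assumptions, not data from the paper.

```python
def forecast_errors(actual, forecast):
    """Common forecast accuracy measures: ME, MAE, RMSE, and MAPE (in percent)."""
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    me = sum(errors) / n                                   # mean error (bias)
    mae = sum(abs(e) for e in errors) / n                  # mean absolute error
    rmse = (sum(e * e for e in errors) / n) ** 0.5         # root mean squared error
    mape = 100 * sum(abs((a - f) / a)                      # mean absolute % error
                     for a, f in zip(actual, forecast)) / n
    return {"ME": me, "MAE": mae, "RMSE": rmse, "MAPE": mape}

# ME ≈ 0.0, MAE ≈ 6.67, RMSE ≈ 8.16, MAPE ≈ 5.0
print(forecast_errors([100, 200, 400], [110, 190, 400]))
```

Note that MAPE divides by the actual values, so it is undefined when an actual value is zero; the squared term makes RMSE penalize large errors more heavily than MAE.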

  • Research Article
  • Cited by 183
  • 10.1198/016214504000001448
SLEX Analysis of Multivariate Nonstationary Time Series
  • Jun 1, 2005
  • Journal of the American Statistical Association
  • Hernando Ombao + 2 more

We develop a procedure for analyzing multivariate nonstationary time series using the SLEX library (smooth localized complex exponentials), which is a collection of bases, each basis consisting of waveforms that are orthogonal and time-localized versions of the Fourier complex exponentials. Under the SLEX framework, we build a family of multivariate models that can explicitly characterize the time-varying spectral and coherence properties. Every model has a spectral representation in terms of a unique SLEX basis. Before selecting a model, we first decompose the multivariate time series into nonstationary components with uncorrelated (nonredundant) spectral information. The best SLEX model is selected using the penalized log energy criterion, which we derive in this article to be the Kullback–Leibler distance between a model and the SLEX principal components of the multivariate time series. The model selection criterion takes into account all of the pairwise cross-correlation simultaneously in the multivariate time series. The proposed SLEX analysis gives results that are easy to interpret, because it is an automatic time-dependent generalization of the classical Fourier analysis of stationary time series. Moreover, the SLEX method uses computationally efficient algorithms and hence is able to provide a systematic framework for extracting spectral features from a massive dataset. We illustrate the SLEX analysis with an application to a multichannel brain wave dataset recorded during an epileptic seizure.

  • Book Chapter
  • 10.1007/978-3-319-19704-3_11
Advanced Spectral Methods and Their Potential in Forecasting Fuzzy-Valued and Multivariate Financial Time Series
  • Jan 1, 2015
  • Vasile Georgescu + 1 more

In this paper we explore the effectiveness of two nonparametric methods based on a matrix spectral decomposition approach, namely Independent Component Analysis (ICA) and Singular Spectrum Analysis (SSA). The intended area of application is forecasting fuzzy-valued and multivariate time series. Given a multivariate time series, ICA assumes that each of its components is a mixture of several independent underlying factors. Separating such distinct time-varying causal factors becomes crucial in multivariate financial time series analysis, when attempting to explain past co-movements and to predict future evolutions. The multivariate extension of SSA (MSSA) can be employed as a powerful prediction tool, either separately or in conjunction with ICA. As a first application, we use MSSA to recurrently forecast triangular-shaped fuzzy monthly exchange rates, thus aiming to capture both the randomness and the fuzziness of the financial process. A hybrid ICA-SSA approach is also proposed. The primary role of ICA is to reveal certain fundamental factors behind several parallel series of foreign exchange rates. More accurate predictions can be performed via these independent components after their separation: MSSA is employed to compute forecasts of the independent factors, and these forecasts of the underlying factors are then remixed into forecasts of the observable foreign exchange rates.

Keywords: Independent component analysis; Singular spectrum analysis; Forecasting fuzzy-valued; Multivariate time series

  • Research Article
  • Cited by 6
  • 10.1186/s12874-024-02448-3
Effects of missing data imputation methods on univariate blood pressure time series data analysis and forecasting with ARIMA and LSTM
  • Dec 26, 2024
  • BMC Medical Research Methodology
  • Nicholas Niako + 3 more

Background: Missing observations within a univariate time series are common in real life and cause analytical problems in the flow of the analysis. Imputation of missing values is an inevitable step for every incomplete univariate time series. Most existing studies focus on comparing the distributions of imputed data; there is a gap in knowledge on how different imputation methods for univariate time series affect the forecasting performance of time series models. We evaluated the prediction performance of autoregressive integrated moving average (ARIMA) and long short-term memory (LSTM) network models on time series data imputed with ten different techniques.

Methods: Missing values were generated under the missing-completely-at-random (MCAR) mechanism at 10%, 15%, 25%, and 35% rates of missingness using complete data of 24-h ambulatory diastolic blood pressure readings. The performance of mean imputation, Kalman filtering, linear, spline, and Stineman interpolations, the exponentially weighted moving average (EWMA), the simple moving average (SMA), k-nearest neighbors (KNN), and last-observation-carried-forward (LOCF) on the time series structure and on the prediction performance of the LSTM and ARIMA models was compared between imputed and original data.

Results: All imputation techniques either increased or decreased the data autocorrelation and thereby affected the forecasting performance of the ARIMA and LSTM algorithms. The best imputation technique did not guarantee better predictions on the imputed data. Mean imputation, LOCF, KNN, Stineman, and cubic spline interpolation performed better for a small rate of missingness. Interpolation with EWMA and Kalman filtering yielded consistent performance across all scenarios of missingness. Regardless of the imputation method, LSTM achieved slightly better predictive accuracy among the best-performing ARIMA and LSTM models; otherwise, the results varied. In our small sample, ARIMA tended to perform better on data with higher autocorrelation.

Conclusions: We recommend that researchers consider Kalman smoothing, interpolation techniques (linear, spline, and Stineman), and moving average techniques (SMA and EWMA) for imputing univariate time series data, as they perform well on both data distribution and forecasting with ARIMA and LSTM models. LSTM slightly outperforms ARIMA models; however, for small samples, ARIMA is simpler and faster to execute.
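Two of the simpler imputation techniques compared above, LOCF and linear interpolation, can be sketched in plain Python. The toy series with `None` gaps is an illustrative assumption; the study itself used R-style implementations on blood pressure data.

```python
def locf(series):
    """Last-observation-carried-forward: fill each gap with the previous value."""
    out, last = [], None
    for v in series:
        if v is None:
            v = last
        out.append(v)
        last = v
    return out

def linear_interp(series):
    """Fill internal gaps by linear interpolation between the known neighbours.
    Assumes the first and last values are observed (internal gaps only)."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                       # find the end of the gap
            lo, hi = out[i - 1], out[j]      # known neighbours around the gap
            step = (hi - lo) / (j - i + 1)
            for k in range(i, j):
                out[k] = lo + step * (k - i + 1)
            i = j
        i += 1
    return out

print(locf([1, None, None, 4]))          # [1, 1, 1, 4]
print(linear_interp([1, None, None, 4])) # [1, 2.0, 3.0, 4]
```

The two fills diverge exactly where the study's comparison matters: LOCF flattens the gap (raising autocorrelation), while interpolation imposes a trend through it.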

  • Research Article
  • Cited by 7
  • 10.1155/2019/6917658
Assessment of Local Dynamic Stability in Gait Based on Univariate and Multivariate Time Series.
  • Jul 25, 2019
  • Computational and Mathematical Methods in Medicine
  • Henryk Josiński + 5 more

The ability of the locomotor system to maintain continuous walking despite very small external or internal disturbances is called local dynamic stability (LDS). The importance of LDS requires continual work on different aspects of its assessment method, which is based on the short-term largest Lyapunov exponent (LLE). The state space structure is a vital aspect of LDS assessment because the algorithm for computing the LLE from experimental data requires a reconstruction of a state space trajectory. Gait kinematic data are usually one- or three-dimensional, which makes it possible to construct a state space based on either a univariate or a multivariate time series. Furthermore, two variants of the short-term LLE are present in the literature, which differ in the length of the time span over which the short-term LLE is computed. Both the state space structure and the consistency of observations based on values of the two short-term LLE variants were analyzed using time series representing the joint angles at the ankle, knee, and hip joints. The short-term LLE was computed for individual joints in three state spaces constructed on the basis of either univariate or multivariate time series. Each state space revealed walkers' locally unstable behavior as well as its attenuation in the current stride. The corresponding conclusions based on the two short-term LLE variants were consistent in ca. 59% of cases, depending on the joint and the state space. Moreover, the authors present an algorithm for estimating the embedding dimension in the case of a multivariate gait time series.
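The state space reconstruction that the LLE computation depends on is typically a Takens-style delay embedding of the univariate series. This plain-Python sketch uses an illustrative dimension and lag, not the paper's settings or its multivariate embedding-dimension algorithm.

```python
def delay_embed(series, dim, lag):
    """Reconstruct state-space vectors x_t = (s_t, s_{t+lag}, ..., s_{t+(dim-1)*lag})
    from a univariate series, as required before computing a Lyapunov exponent."""
    n = len(series) - (dim - 1) * lag  # number of complete embedding vectors
    return [tuple(series[t + d * lag] for d in range(dim)) for t in range(n)]

# Toy signal: 10 samples embedded into 3 dimensions with a lag of 2.
states = delay_embed(list(range(10)), dim=3, lag=2)
print(states[0])    # (0, 2, 4)
print(len(states))  # 6 vectors: 10 - (3 - 1) * 2
```

In practice the embedding dimension and lag are chosen from the data (e.g. via false nearest neighbours and the first minimum of mutual information), which is why the paper's embedding-dimension estimator matters.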

  • Research Article
  • 10.1016/j.neunet.2024.106800
RFNet: Multivariate long sequence time-series forecasting based on recurrent representation and feature enhancement
  • Oct 23, 2024
  • Neural Networks
  • Dandan Zhang + 3 more

  • Research Article
  • Cited by 122
  • 10.1016/s0165-7836(96)00482-1
Modelling and forecasting monthly fisheries catches: comparison of regression, univariate and multivariate time series methods
  • Jan 1, 1997
  • Fisheries Research
  • K.I Stergiou + 2 more

  • Book Chapter
  • 10.1007/978-3-030-70626-5_19
A Deep Hybrid Neural Network Forecasting for Multivariate Non-stationary Time Series
  • Jan 1, 2021
  • Xiaojian Yang + 1 more

In the field of financial time series prediction, multivariate time series are increasingly considered as the input of prediction models, and non-stationary time series have always been the most common data sets. However, processing efficiency is low and cost is high when traditional methods are used to model non-stationary time series. This paper aims to provide methodological guidance for building low-cost models of multivariate non-stationary time series. By building a univariate CNN, a multivariate CNN, a non-pooling CNN (NPCNN), a CNN-LSTM, and an NPCNN-LSTM, we conducted a series of comparative experiments. We found that multivariate non-stationary time series are not complex enough, that the pooling operation loses useful information, and that an LSTM layer can weaken this negative effect. Meanwhile, convolutional layers and LSTM layers can improve prediction accuracy, and adding LSTM to prediction models yields better performance in short-term prediction.

Keywords: CNN; LSTM; Multivariate non-stationary time series; Pooling layers

  • Research Article
  • Cited by 75
  • 10.1016/j.eswa.2007.11.057
Analysis and modeling of multivariate chaotic time series based on neural network
  • Dec 10, 2007
  • Expert Systems with Applications
  • M Han + 1 more

  • Conference Article
  • Cited by 45
  • 10.1145/375663.375808
Time series similarity measures and time series indexing (abstract only)
  • May 1, 2001
  • Dimitrios Gunopulos + 1 more

Time series is the simplest form of temporal data. A time series is a sequence of real numbers collected regularly in time, where each number represents a value. Time series data come up in a variety of domains, including stock market analysis, environmental data, telecommunications data, medical data and financial data. Web data that count the number of clicks on given sites, or model the usage of different pages, are also modeled as time series. Therefore time series account for a large fraction of the data stored in commercial databases. There is increasing recognition of this fact, and support for time series as a distinct data type in commercial database management systems is growing; IBM DB2, for example, implements support for time series using data-blades.

The pervasiveness and importance of time series data has sparked a lot of research work on the topic. While the statistics literature on time series is vast, it has not studied methods appropriate for the time series similarity and indexing problems we discuss here; much of the relevant work on these problems has been done by the computer science community.

One interesting problem with time series data is finding whether different time series display similar behavior. More formally: given two time series X and Y, determine whether they are similar or not (in other words, define and compute a distance function dist(X, Y)). Typically each time series describes the evolution of an object, for example the price of a stock, or the level of pollution as a function of time at a given data collection station. The objective can be to cluster the different objects into similar groups, or to classify an object based on a set of known object examples. The problem is hard because the similarity model should allow for imprecise matches. One interesting variation is the subsequence similarity problem, where given two time series X and Y, we have to determine those subsequences of X that are similar to pattern Y. To answer these problems, different notions of similarity between time series have been proposed in data mining research.

In the tutorial we examine the different time series similarity models that have been proposed, in terms of efficiency and accuracy. The solutions encompass techniques from a wide variety of disciplines, such as databases, signal processing, speech recognition, pattern matching, combinatorics and statistics. We survey proposed similarity techniques, including the Lp norms, time warping, longest common subsequence measures, baselines, moving averages, and deformable Markov model templates.

Another problem that comes up in applications is the indexing problem: given a time series X and a set of time series S = {Y1,…,YN}, find the time series in S that are most similar to the query X. A variation is the subsequence indexing problem, where given a set of sequences S and a query sequence (pattern) X, find the sequences in S that contain subsequences similar to X. To solve these problems efficiently, appropriate indexing techniques have to be used. Typically the similarity problem is related to the indexing problem: simple (and possibly inaccurate) similarity measures are usually easy to build indexes for, while more sophisticated similarity measures make the indexing problem hard and interesting.

We examine the indexing techniques that can be used for the different models, and the dimensionality reduction techniques that have been proposed to improve indexing performance. A time series of length n can be considered as a tuple in an n-dimensional space. Indexing this space directly is inefficient because of the very high dimensionality. The main idea to improve on it is to use a dimensionality reduction technique that takes the n-item-long time series and maps it to a lower-dimensional space with k dimensions (hopefully, k ≪ n).
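The dimensionality-reduction idea sketched at the end, mapping an n-point series to k features before indexing, can be illustrated with piecewise aggregate approximation (PAA), one common such technique (not necessarily the one the tutorial settles on). The sizes here are illustrative assumptions.

```python
def paa(series, k):
    """Piecewise aggregate approximation: reduce an n-point series to k segment
    means, a standard dimensionality reduction for time series indexing."""
    n = len(series)
    out = []
    for i in range(k):
        lo, hi = i * n // k, (i + 1) * n // k  # segment boundaries
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def euclidean(a, b):
    """Plain L2 distance between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

x = [float(i) for i in range(16)]
print(paa(x, 4))  # [1.5, 5.5, 9.5, 13.5]
```

An index built over the k-dimensional PAA features can safely prune candidates because the (suitably scaled) distance between PAA representations lower-bounds the Euclidean distance between the full series, so no true match is discarded.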

  • Research Article
  • 10.6092/unina/fedoa/10270
Contributions in Classification: Visual Pruning for Decision Trees, P-spline Based Clustering of Correlated Series, Boosted-Oriented Probabilistic Clustering of Series
  • Mar 30, 2015
  • Carmela Iorio

  • Research Article
  • Cited by 19
  • 10.1111/j.1467-9892.2012.00805.x
Editorial: Special issue on time series analysis in the biological sciences
  • Jun 6, 2012
  • Journal of Time Series Analysis
  • David S Stoffer + 1 more
