Articles published on Real Time Series Data
159 Search results
- Research Article
- 10.1080/00986445.2025.2611401
- Jan 2, 2026
- Chemical Engineering Communications
- Shivangi Sharma + 2 more
This study presents a comprehensive framework integrating experimental, computational, and intelligent control strategies for the sustainable synthesis of methyl acetate in a solar-assisted reactive distillation column. A pilot-scale RD system, thermally powered by an evacuated tube collector, was experimentally evaluated under varying operational parameters to achieve a high product purity (>99%) with a reboiler heat input of 1.5 kW and operating at a reflux ratio of 3. To gain spatial insights into internal column dynamics, a three-dimensional computational fluid dynamics model was developed using actual geometric and process conditions. The CFD results validated experimental observations, accurately capturing component distribution and temperature gradients across column zones, and confirmed the effective operation of the reactive zone with minimal carryover. To ensure operational robustness, a variational autoencoder-based anomaly detection model was trained on real-time temperature time-series data, successfully identifying process deviations such as heat fluctuations and tray flooding. Upon anomaly detection, a dynamic control strategy using PID regulation was implemented, demonstrating rapid stabilization of the system within 6 min, enhancing both conversion and safety. Dynamic simulation further validated the absence of unwanted reactions in nonreactive zones and confirmed effective kinetic behavior under controlled conditions. This integrated approach demonstrates the viability of solar-driven RD systems as a sustainable alternative for continuous esterification, while offering a scalable methodology for real-time monitoring, fault diagnosis, and intelligent process control.
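The abstract pairs anomaly detection with PID regulation for rapid stabilization. Below is a minimal sketch of a discrete PID loop acting on a toy first-order temperature model; the gains, setpoint, and plant dynamics are illustrative assumptions, not the authors' tuned values.

```python
import numpy as np

# Minimal discrete PID loop, sketching the kind of regulation the study
# describes for stabilizing the column after an anomaly is flagged.
# Gains and the first-order plant are assumed, not the paper's values.

def pid_step(error, integral, prev_error, dt, kp=2.0, ki=0.5, kd=0.1):
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

setpoint = 340.0          # target tray temperature (K), assumed
temp = 352.0              # disturbed state after a heat fluctuation
integral, prev_err, dt = 0.0, 0.0, 1.0

for t in range(360):      # 6 min of 1 s steps, matching the reported horizon
    err = setpoint - temp
    u, integral = pid_step(err, integral, prev_err, dt)
    prev_err = err
    temp += dt * (-0.05 * (temp - setpoint) + 0.01 * u)  # toy first-order plant
print(f"temperature after 6 min: {temp:.2f} K")
```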
- Research Article
- 10.1007/s00406-025-02173-y
- Dec 23, 2025
- European archives of psychiatry and clinical neuroscience
- Tianzheng Zhong + 3 more
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by heterogeneity in behavioral symptoms and gray matter (GM) volume changes across the brain. However, progressive and causal GM volume changes and their associations with behavioral symptoms remain unclear. Our study aimed to explore the heterogeneity of causal GM volume changes in individuals with ASD across behavioral spectra. Brain structural imaging and clinical data of 131 autistic individuals and 246 neurotypical individuals (NTs) were included and clustered into neuroanatomical subtypes. A novel behavioral-causal structural covariance network (BCaSCN) analysis approach was developed. GM volume maps from each ASD subtype were ordered according to the social responsiveness scale (SRS) value and the values from each behavioral domain of the SRS, generating pseudo-time series data with disease progression. We performed BCaSCN analysis on the pseudo-time series data to explore the causal relationships of GM volume changes across behavioral spectra among ASD subtypes. Two neurosubtypes of ASD with distinct GM volume alterations were observed. CaSCN analysis revealed heterogeneity in causal GM volume alterations between the two ASD neurosubtypes. Furthermore, BCaSCN analysis across behavioral spectra demonstrated that subtype 1 exhibited higher overall out- and in-degree GC values in the cognition domain, whereas subtype 2 displayed higher overall out- and in-degree GC values in the domains of motivation, mannerisms, and communication. Because CaSCN and BCaSCN rely on pseudo-time series data rather than real time-series data, longitudinal data are needed to validate the results of this study, and the results should be interpreted with caution. These findings suggest that ASD subtypes are associated with heterogeneous causal GM volume changes, which may be related to distinct behavioral domains.
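The BCaSCN approach rests on Granger-causality (GC) estimates between regional GM volume sequences ordered by symptom severity. A minimal sketch of one pairwise GC test on synthetic pseudo-time series using statsmodels follows; the data, lag choice, and test statistic are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# One pairwise Granger-causality test, the elementary operation behind a
# causal structural covariance network. Synthetic data: region_b depends
# on the previous value of region_a, so GC from a to b should be detected.

rng = np.random.default_rng(0)
n = 131                                 # one value per severity-sorted participant
region_a = rng.normal(size=n)
region_b = 0.6 * np.roll(region_a, 1) + rng.normal(scale=0.5, size=n)

# second column is tested as a Granger cause of the first column
res = grangercausalitytests(np.column_stack([region_b, region_a]), maxlag=2)
p_value = res[2][0]["ssr_ftest"][1]     # p-value of the lag-2 F-test
print(f"p(region_a -> region_b) = {p_value:.4f}")
```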
- Research Article
- 10.1145/3785661
- Dec 19, 2025
- ACM Computing Surveys
- Feriel Fass + 3 more
In this paper, we present a comprehensive survey of emergency call volume prediction methods, along with a comparative experimental study of various models. We first outline the methods and their use cases, highlighting the key features leveraged in each state-of-the-art approach. Using real time series data on emergency calls, supplemented with meteorological, demographic, and event-related variables, we evaluate the existing models at two granularities: yearly and daily. In addition to applying the original methods as proposed in the state of the art, we perform variable selection using techniques such as Lasso, Correlation Coefficients (CC), Recursive Feature Elimination (RFE), and Random Forest Feature Importance (RFFI). We then compare time series based models, regression models, neural networks, and non-parametric approaches. Performance is evaluated using metrics including Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), Residual Standard Deviation (RSD), and the Coefficient of Determination (R²). The results show that Random Forest and feature-selection–based Lasso achieve the highest accuracy for predicting the total call volume for each hour of the day throughout the year. For daily call volume, time series–based methods perform best when using weather conditions and temporal variables selected by the RFFI method.
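A brief sketch of the Lasso-based variable selection and error metrics the survey describes, using scikit-learn; the covariates and data below are placeholders, not the study's emergency-call dataset.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error

# Covariates (weather, calendar, events) enter a cross-validated Lasso;
# non-zero coefficients define the selected feature set. Data are synthetic.

rng = np.random.default_rng(1)
X = rng.normal(size=(365, 8))          # e.g. temperature, rain, holidays, ...
y = 200 + 15 * X[:, 0] - 10 * X[:, 3] + rng.normal(scale=5, size=365)

model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)  # indices of retained covariates
pred = model.predict(X)
print("selected feature indices:", selected)
print(f"MAE={mean_absolute_error(y, pred):.2f}  "
      f"MAPE={mean_absolute_percentage_error(y, pred):.3f}")
```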
- Research Article
- 10.1037/pas0001408
- Oct 1, 2025
- Psychological assessment
- Aidan G C Wright + 2 more
Ambulatory assessment is popular in research settings for its ability to assess real-world functioning. It is useful for estimating an individual's typical level of a behavior (individual mean), how (un)stable that behavior is (individual standard deviation), how behaviors associate with others or specific contexts (within-person correlation), and shifts in those statistics that might signal an important change in functioning (e.g., early warning signal). However, many of the methodological advances have not made the jump from the lab to clinical practice. Effective use of ambulatory assessment in applied settings to understand functioning and guide potential interventions requires development and application of psychometric standards for N = 1 assessments. We conducted a simulation study to determine how many assessments are necessary to achieve sufficiently reliable (i.e., precise and stable) estimates of an individual's mean and standard deviation on a single variable as well as the correlation between two variables. To ensure the ecological validity of the simulation conditions, we used real time series data from a large sample that included psychiatric patients and nonpatients (capturing realistic levels of autocorrelation and skewness). We found that the minimum number of assessments depends on the statistic of interest and the temporal characteristics of the variable of interest. Individual means can be estimated reliably with a reasonably small number of observations under most conditions, but adequately precise and stable individual correlations require more assessments than may be achievable in many applied settings. Implications of these results for the potential of applied ambulatory assessment in clinical practice are discussed.
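The core simulation question, how estimate precision scales with the number of assessments, can be sketched with a small Monte Carlo experiment; the AR(1) generator and parameter values below are assumptions, not the authors' empirically calibrated conditions.

```python
import numpy as np

# How does the sampling variability of an individual's estimated mean
# shrink as the number of ambulatory assessments grows? AR(1) data mimic
# the autocorrelation the study preserves; phi and the grid are assumed.

rng = np.random.default_rng(2)
phi, n_reps = 0.4, 2000

for n_obs in (10, 25, 50, 100):
    means = np.empty(n_reps)
    for r in range(n_reps):
        x = np.zeros(n_obs)
        for t in range(1, n_obs):
            x[t] = phi * x[t - 1] + rng.normal()
        means[r] = x.mean()
    print(f"n={n_obs:3d}  SD of the individual-mean estimate: {means.std():.3f}")
```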
- Research Article
- 10.1098/rsif.2025.0154
- Jul 1, 2025
- Journal of the Royal Society, Interface
- Taiju Yukihira + 2 more
Ecological interactions in natural communities are often highly nonlinear; that is, interaction strengths can fluctuate temporally depending on community states. An effective and reliable tool to infer state-dependent interactions from empirical data is crucial to ecological studies. Here, we propose a novel non-parametric inference method based on Gaussian process regression to quantify interaction strengths from nonlinear time series data. We introduce the method by extending the Gaussian process empirical dynamic modelling (GP-EDM) approach in ecology. To confirm its applicability, we investigated the performance of the proposed method using both synthetic and real time-series data. The results highlight that the proposed method possesses several distinct features. First, through performance comparison with existing methods (S-map and regularized S-map), the proposed method achieves higher inference accuracy for noisy time series data. Second, the proposed method analytically accounts for the dependence of interaction strengths on community states. This enables us to locally evaluate state-dependent changes in interaction strengths by exploring hypothetical community states. Moreover, because the posterior function is derived analytically, the proposed method can easily evaluate the inference uncertainty (e.g. credible interval), resulting in more reliable inference outcomes. The proposed method provides a basis for addressing state dependence in analyses of species interactions.
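A rough sketch of the GP-EDM idea: fit a Gaussian process mapping the community state to a species' next value, then read a state-dependent interaction strength off the posterior mean as a local partial derivative. The simulated two-species dynamics and the finite-difference step are assumptions; the paper derives this dependence analytically rather than numerically.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Fit a GP from the community state (x_t, y_t) to x_{t+1}, then estimate
# the effect of species y on species x at a chosen state via a central
# finite difference of the posterior mean. Toy coupled-map dynamics.

rng = np.random.default_rng(3)
T = 200
x, y = np.empty(T), np.empty(T)
x[0], y[0] = 0.4, 0.4
for t in range(T - 1):
    x[t + 1] = x[t] * (3.6 - 3.6 * x[t] - 0.3 * y[t]) + rng.normal(scale=0.01)
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t]) + rng.normal(scale=0.01)

states = np.column_stack([x[:-1], y[:-1]])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gp.fit(states, x[1:])

s = np.array([[0.5, 0.5]])   # a hypothetical community state
eps = 1e-3
ds = (gp.predict(s + [[0, eps]]) - gp.predict(s - [[0, eps]])) / (2 * eps)
print(f"local effect of y on x at state {s[0]}: {ds[0]:.3f}")
```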
- Research Article
- 10.1080/02664763.2025.2501173
- May 7, 2025
- Journal of Applied Statistics
- Rolando Rubilar-Torrealba + 2 more
In this research, we develop a Bayesian approach to estimate the parameters defining the fractional Poisson process in the context of extreme values and with a limited sample size, utilizing the approximate Bayesian computation (ABC) technique. Furthermore, a test was developed, wherein the null hypothesis was set as the presence of long memory in the fractional Poisson process. A simulation study was conducted to demonstrate the enhanced efficacy for parameter estimation in small samples in comparison to the asymptotic approach. Additionally, the study presented an application to real financial time series data, with the aim of characterizing the phenomenon of extreme data. This resulted in the delivery of a new approach to characterize large losses in stock markets and other phenomena related to the fractional Poisson process.
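A generic rejection-ABC loop in the spirit of the paper: draw the memory parameter from a prior, simulate, and keep draws whose summary statistic lands near the observed one. The heavy-tailed renewal simulator below is a simplified stand-in for the fractional Poisson process, and the prior, tolerance, and summary choice are all assumptions.

```python
import numpy as np

# Rejection ABC: accept a parameter draw when its simulated summary
# statistic (here, the event count over a horizon) is close to the
# observed one. The Pareto-waiting-time renewal process is a stand-in
# simulator, not the paper's fractional Poisson implementation.

rng = np.random.default_rng(4)

def simulate_counts(tail_index, horizon=500.0):
    t, n = 0.0, 0
    while t < horizon:
        t += rng.pareto(tail_index) + 1.0   # heavy-tailed inter-arrival time
        n += 1
    return n

observed = simulate_counts(1.4)             # pretend this is the data
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.8, 3.0)           # prior on the tail index
    if abs(simulate_counts(theta) - observed) <= 5:   # tolerance
        accepted.append(theta)
print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} draws")
```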
- Research Article
- 10.1609/aaai.v39i18.34164
- Apr 11, 2025
- Proceedings of the AAAI Conference on Artificial Intelligence
- Reza Nematirad + 2 more
Time series forecasting is an important application in various domains such as energy management, traffic planning, financial markets, meteorology, and medicine. However, real time-series data often present intricate temporal variability and sharp fluctuations, which pose significant challenges for time series forecasting. Previous models that rely on 1D time series representations usually struggle with complex temporal variations. To address the limitations of 1D time series, this study introduces the Times2D method, which transforms the 1D time series into 2D space. Times2D consists of three main parts. First, a Periodic Decomposition Block (PDB) captures temporal variations within a period and between the same periods by converting the time series into a 2D tensor in the frequency domain. Second, the First and Second Derivative Heatmaps (FSDH) capture sharp changes and turning points, respectively. Finally, an Aggregation Forecasting Block (AFB) integrates the output tensors from PDB and FSDH for accurate forecasting. This 2D transformation enables the utilization of 2D convolutional operations to effectively capture long- and short-term characteristics of the time series. Comprehensive experimental results across large-scale datasets from the literature demonstrate that the proposed Times2D model achieves state-of-the-art performance in both short-term and long-term forecasting.
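A sketch of the 1D-to-2D step underlying the Periodic Decomposition Block: pick a dominant period from the amplitude spectrum and fold the series into a (cycles × period) array so intra- and inter-period variation become the two axes. Top-k period selection, padding, and the derivative heatmaps are omitted; the synthetic series is an assumption.

```python
import numpy as np

# Fold a 1D series into 2D using the dominant FFT period, so that 2D
# convolutions can see within-period and between-period structure.

rng = np.random.default_rng(5)
T = 480
t = np.arange(T)
series = np.sin(2 * np.pi * t / 24) + 0.3 * rng.normal(size=T)

spectrum = np.abs(np.fft.rfft(series))
spectrum[0] = 0.0                          # ignore the DC component
period = T // int(np.argmax(spectrum))     # dominant period, here ~24

n_cycles = T // period
tensor_2d = series[: n_cycles * period].reshape(n_cycles, period)
print(f"period={period}, 2D shape={tensor_2d.shape}")
```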
- Research Article
- 10.35629/0743-1104111125
- Apr 1, 2025
- Journal of Research in Applied Mathematics
- Ding Peng + 2 more
This paper further discusses the segmentation and estimation of autoregressive (AR) models. The innovations in the AR model do not follow the default normal distribution but rather an arbitrary distribution or a mixture of several distributions. We use a mixture of normal distributions to fit the distribution of the innovations, and employ the Dirichlet process as the prior for the variance of the mixture normal distribution to obtain the posterior estimates of the parameters. By combining Gibbs sampling, we continuously update the parameter estimates of the model to achieve more accurate results. In this study, segmentation is performed along both the horizontal and vertical axes. For the horizontal axis, we use the Bayesian method to identify the threshold values and lag parameters. For the vertical axis, we employ the LASSO method to identify the locations of change points. Numerical simulations are conducted to demonstrate the feasibility of the two segmentation approaches. Moreover, model estimation and analysis are carried out on real time series data.
- Research Article
- 10.3390/electronics14071253
- Mar 22, 2025
- Electronics
- Rokas Štrimaitis + 2 more
The continuous forecasting of anticipated trends in company accounting helps to prepare for possible challenges and investment opportunities. Forecasting performance indicators for each partner and metric in a company requires substantial resources, as each forecasting model requires some supervised adjustment. To solve the challenge of manual work in this task, AutoML solutions can be used. In this study, we propose an automated forecasting model selection option. The time series data are summarized by 21 statistical features, and different experiments are performed to find the best-suited forecasting method. Different classifier models are tested with the dataset, estimating the impact of data sampling and synthetic data generation. The results indicate that the undersampling approach is more suitable, as it helps in balancing the classes. Random forest methods usually show the best performance, achieving about 74% accuracy. The use of a synthetic data-based dataset for model training reduced the accuracy by almost 20%, while the integration of synthetic and real time series data allowed us to achieve balance between both classes. This confirms that synthetic time series data generation might increase the accuracy of forecasting method selection, but it should be used in combination with real data.
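A compact sketch of the selection pipeline: summarize each series with statistical features (the paper uses 21; four are shown) and train a random forest to predict which forecasting method suits it. The feature set, labels, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Summarize each series with a few statistics, then train a classifier
# that maps the summary to a "best forecasting method" label. Labels here
# are synthetic stand-ins for the paper's method classes.

rng = np.random.default_rng(6)

def summarize(series):
    diffs = np.diff(series)
    return [series.mean(), series.std(),
            np.corrcoef(series[:-1], series[1:])[0, 1],  # lag-1 autocorrelation
            diffs.std()]                                  # roughness

X, y = [], []
for i in range(300):
    trend = rng.uniform(-0.5, 0.5)
    s = trend * np.arange(100) + rng.normal(scale=1 + 2 * (i % 2), size=100)
    X.append(summarize(s))
    y.append(i % 2)          # 0/1 stands in for the best-method label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```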
- Research Article
- 10.15406/ijh.2025.09.00398
- Jan 1, 2025
- International Journal of Hydrology
- Sonu Kumar + 7 more
In the northern part of India, the 900-km-long Gomati River drains the central Ganga Alluvial Plain and transports its water and sediments to the Ganga River, one of the world's largest fluvial systems. The basin experiences a humid, sub-tropical climate characterised by a monsoon season of heavy precipitation. In the distal part of the river basin, hydrological data were collected from the Maighat gauging station located at Chandwak for the present stage-discharge relationship study. With increasing fresh water demands from the ever-growing human population in the Ganga Alluvial Plain, the Gomati River Basin has been facing acute crises of water resources and environmental degradation. Thus, the primary aims of the present study are (1) to analyse the stage-discharge relationship of the Gomati River and detect its seasonal characteristics, and (2) to elucidate a reliable stage-discharge rating curve for water resource management and environmental conservation. Findings indicated that the stage-discharge relationship displayed a better correlation coefficient during the summer (R²=0.9881, N=23) and post-monsoon (R²=0.9166, N=18) seasons than the winter (R²=0.8907, N=27) and monsoon (R²=0.8925, N=36) seasons. The seasonal stage-discharge rating curves are discussed with particular reference to predicting discharge variability accurately for this low-gradient, single-channel alluvial river. Based on detailed analysis and goodness-of-fit criteria, the linear stage-discharge rating curve was found to be the best for the Gomati River, demonstrating good predictive accuracy (R²=0.9712, N=66) for discharge conditions of <250 m³/s. The results have great practical importance for water resource management in the densely populated and intensively farmed alluvial plain of the Gomati River Basin. This is the first hydrological rating-curve study for this river based on real time-series data. Future studies should advance our understanding of the stage-discharge relationship under climate change scenarios.
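A minimal sketch of fitting and scoring a linear rating curve Q = a + b·h, the form the study found best; the stage-discharge pairs below are synthetic placeholders, not the Maighat gauging records.

```python
import numpy as np

# Fit a linear stage-discharge rating curve Q = a + b*h by least squares
# and report R². Synthetic stage (m) and discharge (m3/s) data.

rng = np.random.default_rng(7)
stage = rng.uniform(1.0, 6.0, size=66)                        # gauge height
discharge = -20 + 40 * stage + rng.normal(scale=8, size=66)   # m3/s

b, a = np.polyfit(stage, discharge, 1)       # slope first, then intercept
pred = a + b * stage
ss_res = np.sum((discharge - pred) ** 2)
ss_tot = np.sum((discharge - discharge.mean()) ** 2)
print(f"Q = {a:.1f} + {b:.1f} h,  R² = {1 - ss_res / ss_tot:.4f}")
```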
- Research Article
- 10.1111/2041-210x.14462
- Nov 22, 2024
- Methods in Ecology and Evolution
- Cristóbal Gallegos + 3 more
Environmental change can drive evolutionary adaptation and determine geographic patterns of biodiversity. Yet at a time of rapid environmental change, our ability to predict its evolutionary impacts is incomplete. Temporal environmental change, in particular, involves a combination of major components such as abrupt shift, trend, cyclic change, and noise. Theoretical predictions exist for adaptation to isolated components, but knowledge gaps remain regarding their joint impacts. We extend classic evolutionary theory to develop a model for the evolution of environmental tolerance via the evolution of an underlying developmentally plastic trait, in response to major components of temporal change. We retrieve and synthesise earlier predictions of responses to isolated components, and generate new predictions for components changing simultaneously. Notably, we show how different forms of environmental predictability, emerging from the interplay of cyclic change, stochastic change (noise) and lag between development and selection, shape predictions. We then illustrate the utility of our model for generating testable predictions for the evolution of adaptation and plasticity when parameterised with real time series data. Specifically, we parameterise our model with daily time series of sea-surface temperature from a global marine hotspot in southern Australia, and use model simulations to predict the evolution of thermal tolerance, and geographic differences in tolerance, in this region. By synthesising theory on evolutionary adaptation to temporal environmental change, and providing new insights into the joint effects of its different components, our framework, embedded in a Shiny app, offers a path to better predictions of biological responses to climate change.
- Research Article
- 10.1088/1742-6596/2901/1/012009
- Nov 1, 2024
- Journal of Physics: Conference Series
- Jiaqi Wu + 2 more
Monitoring wax deposition in oil wells becomes increasingly difficult under complex and extended downhole working conditions, where direct measurement is often impractical. Over time, wax buildup intensifies, affecting oil production and equipment functionality. This paper proposes a predictive strategy for rod pumping well operations. It integrates the Crested Porcupine Optimizer (CPO) with Variational Mode Decomposition (VMD), selecting suitable K and α values for the electrical power of oil wells. A RIME-improved Long Short-Term Memory (RIME-LSTM) network forecasts each mode component, and the final prediction is obtained by summing these values. The method's performance was validated using real power time series data from four wells over a specific period. The coefficient of determination (R²) and mean absolute percentage error (MAPE) were used to evaluate the models, confirming their effectiveness. Finally, a rating strategy for wax deposition predicts when to perform shutdown cleaning before severe accumulation occurs, thus protecting the oil pump and preventing low production rates. This article thus studies the prediction of downhole wax deposition severity and demonstrates its effectiveness through experiments.
- Research Article
- 10.1093/gji/ggae381
- Oct 23, 2024
- Geophysical Journal International
- Kunpu Ji + 3 more
The improved SSA (ISSA) method is widely recognized for directly extracting signals from gappy time-series without requiring prior interpolation. However, it is rather time-consuming, particularly for long time-series with large window sizes, such as Global Navigation Satellite System (GNSS) position time-series. This study proposes an efficient ISSA method that yields results equivalent to the ISSA method while significantly reducing computation time. Both methods aim to minimize the quadratic norm of principal components, while our method has fewer unknown parameters in the principal component computation than the ISSA method. We evaluate the performance of the proposed method using real GNSS position time-series from 27 permanent stations located in mainland China. Results show that the proposed method effectively reduces computation time compared with the ISSA method, and the improvement depends on the chosen window size, the time-series length, and the percentage of missing data. This efficient approach can be naturally extended to principal component analysis (PCA) and multichannel SSA (MSSA) for processing multiple incomplete time-series, improving computational efficiency compared to the modified PCA and the improved MSSA while leaving the results unchanged. We also compare the ISSA method with the modified SSA (SSAM) and the iterative SSA methods using both real and synthetic time-series data. Results indicate that the ISSA method outperforms the SSAM method and, when conducted iteratively, also surpasses the iterative SSA method.
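For orientation, a basic SSA sketch on a complete (gap-free) series: embed into a trajectory matrix, take the SVD, and reconstruct a low-rank signal by diagonal averaging. The ISSA variants discussed in the paper extend this to gappy series; the window size and rank below are assumptions.

```python
import numpy as np

# Basic singular spectrum analysis: trajectory matrix -> SVD -> low-rank
# reconstruction -> diagonal averaging back to a 1D series.

rng = np.random.default_rng(8)
N, L, rank = 300, 60, 2
t = np.arange(N)
series = 0.01 * t + np.sin(2 * np.pi * t / 30) + 0.3 * rng.normal(size=N)

K = N - L + 1
traj = np.column_stack([series[i:i + L] for i in range(K)])   # L x K
U, s, Vt = np.linalg.svd(traj, full_matrices=False)
low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]

recon = np.zeros(N)      # diagonal (anti-diagonal) averaging
counts = np.zeros(N)
for i in range(L):
    for j in range(K):
        recon[i + j] += low_rank[i, j]
        counts[i + j] += 1
recon /= counts
print(f"residual std after rank-{rank} SSA: {np.std(series - recon):.3f}")
```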
- Research Article
- 10.58578/amjsai.v1i1.3755
- Jul 31, 2024
- African Multidisciplinary Journal of Sciences and Artificial Intelligence
- Emwinloghosa Kenneth Guobadia + 1 more
In this paper, we examine whether transformation improves model performance in time series modeling. The class of transformations considered is the Box-Cox family, applied to the k-th weighted moving average (k-th WMA) model and the autoregressive integrated moving average (ARIMA) model for a given nonstationary economic time series. A real nonstationary economic time series was used to demonstrate the procedure. The nonstationary series can be made stationary through differencing alongside the Box-Cox transformation. The ARIMA model is fitted to the transformed data using the Box-Jenkins techniques, where the best ARIMA model is selected among the competing ARIMA models using the corrected Akaike information criterion (AICc), while the best k-th WMA is selected among the competing models using evaluation metrics such as root mean square error (RMSE) and mean absolute error (MAE). Finally, the optimal model is selected between the ARIMA and k-th WMA models using the RMSE and MAE. Our findings are that the transformed k-th WMA models outperformed the classical ARIMA models for the set of Box-Cox transformation parameters considered for the data used.
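A short sketch of the two ingredients being compared: a maximum-likelihood Box-Cox transform of a positive nonstationary series and a k-th weighted moving average smoother. The linearly increasing weights and the synthetic data are assumptions.

```python
import numpy as np
from scipy.stats import boxcox

# Box-Cox transform (lambda by maximum likelihood) followed by a k-th
# weighted moving average with linearly increasing weights, a common
# WMA convention; the paper's exact weighting scheme may differ.

rng = np.random.default_rng(9)
series = np.exp(0.01 * np.arange(200) + rng.normal(scale=0.1, size=200))

transformed, lam = boxcox(series)
print(f"estimated Box-Cox lambda: {lam:.3f}")

def kth_wma(x, k=5):
    w = np.arange(1, k + 1, dtype=float)
    w /= w.sum()
    # reversed kernel so the most recent point gets the largest weight
    return np.convolve(x, w[::-1], mode="valid")

smoothed = kth_wma(transformed)
rmse = np.sqrt(np.mean((transformed[4:] - smoothed) ** 2))
print(f"in-sample RMSE of the 5-th WMA: {rmse:.4f}")
```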
- Research Article
- 10.1007/s41060-024-00599-6
- Jul 9, 2024
- International Journal of Data Science and Analytics
- Fouad Bahrpeyma + 3 more
The utilization of machine learning has become ubiquitous in addressing contemporary challenges in data science. Moreover, there has been significant interest in democratizing the decision-making process for selecting machine learning algorithms, achieved through the incorporation of meta-features and automated machine learning techniques for both classification and regression tasks. However, this paradigm has not been readily applied to multistep-ahead time series prediction problems. Unlike regression and classification problems, which utilize independent variables not derived from the target variable, time series models typically rely on past values of the series to forecast future outcomes. The structure of a time series is often characterized by features such as trend, seasonality, cyclicality and irregularity. In our study, we illustrate how time series metrics representing these features, in conjunction with an ensemble-based regression Meta-Learner, were employed to predict the standardized mean square error of candidate time series prediction models. Our experiments utilized datasets covering a broad feature space, facilitating the selection of the most effective model by researchers. A rigorous evaluation was conducted to assess the performance of the Meta-Learner on both synthetic and real time series data.
- Research Article
- 10.3390/en17133220
- Jun 30, 2024
- Energies
- Kazimierz Kawa + 4 more
The management of large enterprises influences their efficiency and profitability. One important aspect is the appropriate management of electricity consumption used for production and daily operation. The problem becomes more complicated when not one building but a large complex of buildings with heterogeneous purposes must be managed. In the paper, we examine real time-series data of electrical energy consumption in a complex of heterogeneous buildings, including offices and warehouses, using time series analysis methods such as the Holt–Winters model and ARIMA/SARIMA models, and neural networks (Deep Neural Network, Recurrent Neural Network, and Long Short-Term Memory). Experimental research was performed on a dataset obtained from an energy consumption meter placed in the building complex, whose buildings were constructed in different periods and equipped with a variety of automation devices. The data were collected over a period of four years (2018–2021) in the form of time series. Results show that classical models predict energy consumption well in this type of building. The ARIMA model gave the best results: for buildings characterized by seasonality and trends, the forecasts closely matched actual values.
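A minimal SARIMA sketch on a synthetic daily-seasonality load profile, using statsmodels; the (p,d,q)(P,D,Q,s) orders are assumptions, not the paper's tuned configuration.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Fit a seasonal ARIMA to hourly consumption with a 24-hour cycle and
# forecast the next day. Data and model orders are illustrative.

rng = np.random.default_rng(10)
hours = np.arange(24 * 60)       # 60 days of hourly readings
load = 50 + 10 * np.sin(2 * np.pi * hours / 24) \
       + rng.normal(scale=2, size=hours.size)

model = SARIMAX(load, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=24)   # next day's hourly consumption
print(forecast[:6])
```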
- Research Article
- 10.1016/j.epsr.2024.110723
- Jun 29, 2024
- Electric Power Systems Research
- Siyang Li + 2 more
DiffPLF: A conditional diffusion model for probabilistic forecasting of EV charging load
- Research Article
- 10.15588/1607-6761-2024-1-5
- Jun 26, 2024
- Electrical Engineering and Power Engineering
- V.V Sidanchenko
Purpose. To develop and investigate a method for controlling the blast furnace process in the absence of an analytical description of its behavior, which allows for the detection of anomalies in the production process. Methodology. This study utilizes methods for evaluating and forecasting time series based on the Kalman filter algorithm, fractal analysis, and nonlinear dynamics. Findings. A method has been developed that allows for the evaluation and forecasting of non-stationary stochastic processes with an unknown analytical model. This method includes an embedded anomaly detection procedure based on the 3-sigma method. Research was conducted on real-time series data of the chemical composition of cast iron at the blast furnace output. It has been demonstrated that the developed method effectively detects anomalies in the process behavior. A possible implementation of a control system using the proposed method has been considered. Originality. For the first time, a modification of the suboptimal Kalman-type filter-forecaster has been developed, invariant to the properties of the underlying process. This modification improves the reliability and accuracy of forecasting non-stationary processes when it is impossible to obtain their analytical model and detects production process anomalies using an embedded procedure based on the 3-sigma method. For the first time, a study of time series data on the chemical composition of cast iron at the blast furnace output using the 3-sigma method has been conducted, which allowed for the identification of zones with anomalous values and outliers, indicating significant deviations in the production process behavior. Practical value. This research expands existing quality control methods in the metallurgical industry and demonstrates the effectiveness of the statistical 3-sigma method for monitoring and analyzing time series in real production conditions. The obtained results can be used to develop more accurate quality control systems and take prompt corrective actions.
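A sketch of the two building blocks the method combines: a scalar random-walk Kalman filter producing one-step forecasts, with an embedded 3-sigma check on the innovation to flag anomalies. The noise levels, the cast-iron-like series, and the injected outlier are assumptions.

```python
import numpy as np

# Scalar Kalman filter on a random-walk state; the innovation (one-step
# forecast error) is screened with a 3-sigma test before the update, so
# anomalous measurements are flagged and skipped.

rng = np.random.default_rng(11)
true_level = np.cumsum(rng.normal(scale=0.05, size=300)) + 4.5  # e.g. % Si
obs = true_level + rng.normal(scale=0.1, size=300)
obs[200] += 1.0                     # injected anomaly

q, r = 0.05 ** 2, 0.1 ** 2          # process / measurement variance
x, p = obs[0], 1.0
for t in range(1, len(obs)):
    p_pred = p + q                  # predict (random-walk state)
    s = p_pred + r                  # innovation variance
    innov = obs[t] - x
    if abs(innov) > 3 * np.sqrt(s): # embedded 3-sigma anomaly test
        print(f"anomaly flagged at t={t}, innovation={innov:.2f}")
        continue                    # skip the update for flagged points
    k = p_pred / s                  # Kalman gain
    x = x + k * innov
    p = (1 - k) * p_pred
```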
- Research Article
- 10.3233/mas-231458
- Jun 11, 2024
- Model Assisted Statistics and Applications
- M R Irshad + 3 more
In their article, Erbayram and Akdoğan (Ricerche di Matematica, 2023) introduced the Poisson-transmuted record type exponential distribution by combining the Poisson and transmuted record type exponential distributions. This article presents a novel approach to modeling time series data using an integer-valued time series framework with binomial thinning and the Poisson-transmuted record type exponential distribution as the innovation distribution. This model demonstrates remarkable proficiency in accurately representing over-dispersed integer-valued time series. Under this configuration, which is a flexible and highly dependable choice, the model accurately captures the underlying patterns present in the time series data. A comprehensive analysis of the statistical characteristics of the process is given. The conditional maximum likelihood and conditional least squares methods are employed to estimate the process parameters. The performance of the estimates is meticulously evaluated through extensive simulation studies. Finally, the proposed model is validated using real time-series data and compared against existing models to demonstrate its practical effectiveness.
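For orientation, a simulation sketch of an INAR(1) process with binomial thinning, the framework the model builds on; a Poisson innovation stands in here for the paper's Poisson-transmuted record type exponential innovation.

```python
import numpy as np

# INAR(1) with binomial thinning: X_t = alpha o X_{t-1} + eps_t, where
# the thinning alpha o X is Binomial(X, alpha). Poisson innovations are
# a stand-in for the paper's innovation distribution.

rng = np.random.default_rng(12)
alpha, lam, T = 0.5, 2.0, 500
x = np.empty(T, dtype=int)
x[0] = rng.poisson(lam)
for t in range(1, T):
    survivors = rng.binomial(x[t - 1], alpha)   # binomial thinning
    x[t] = survivors + rng.poisson(lam)         # add new arrivals
print(f"sample mean {x.mean():.2f} vs theoretical {lam / (1 - alpha):.2f}")
```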
- Research Article
- 10.1088/1361-6579/ad4954
- May 1, 2024
- Physiological Measurement
- Kshama Kodthalu Shivashankara + 5 more
Objective. Cardiovascular diseases are a major cause of mortality globally, and electrocardiograms (ECGs) are crucial for diagnosing them. Traditionally, ECGs are stored in printed formats. However, these printouts, even when scanned, are incompatible with advanced ECG diagnosis software that require time-series data. Digitizing ECG images is vital for training machine learning models in ECG diagnosis, leveraging the extensive global archives collected over decades. Deep learning models for image processing are promising in this regard, although the lack of clinical ECG archives with reference time-series data is challenging. Data augmentation techniques using realistic generative data models provide a solution. Approach. We introduce ECG-Image-Kit, an open-source toolbox for generating synthetic multi-lead ECG images with realistic artifacts from time-series data, aimed at automating the conversion of scanned ECG images to ECG data points. The tool synthesizes ECG images from real time-series data, applying distortions like text artifacts, wrinkles, and creases on a standard ECG paper background. Main results. As a case study, we used ECG-Image-Kit to create a dataset of 21 801 ECG images from the PhysioNet QT database. We developed and trained a combination of a traditional computer vision and deep neural network model on this dataset to convert synthetic images into time-series data for evaluation. We assessed digitization quality by calculating the signal-to-noise ratio and compared clinical parameters like QRS width, RR, and QT intervals recovered from this pipeline, with the ground truth extracted from ECG time-series. The results show that this deep learning pipeline accurately digitizes paper ECGs, maintaining clinical parameters, and highlights a generative approach to digitization. Significance. The toolbox has broad applications, including model development for ECG image digitization and classification. The toolbox currently supports data augmentation for the 2024 PhysioNet Challenge, focusing on digitizing and classifying paper ECG images.