Articles published on Random Series
- Research Article
- 10.1051/0004-6361/202554260
- Oct 1, 2025
- Astronomy & Astrophysics
- E Ziaali + 6 more
Context. Complex systems are characterised by many highly interconnected dynamical units that exhibit non-linear properties. The pulsating stars known as δ Sct stars have intrinsic brightness variations that require non-linear models for a comprehensive description. These stars span a broad range of properties, from low to high amplitudes, as well as a broad range of complex features. We applied the complex network approach to δ Sct stars as non-linear complex systems. Aims. Differences among the constructed networks, which might appear in the network metrics of low-amplitude and high-amplitude δ Sct stars, can indicate intrinsic asteroseismic differences that are essential for the classification of pulsating stars. Additionally, the relations between the asteroseismic parameters and network metrics (such as degree or clustering distributions) can lead us to a better understanding of the dynamics of pulsating stars. Methods. Using the horizontal visibility algorithm, we mapped the TESS light curves of 69 δ Sct stars to undirected horizontal visibility graphs (HVGs), where the graph nodes represent the light curve points. This allowed us to measure the morphological characteristics of the HVGs, such as the distribution of links between nodes (degree distribution) and the average fraction of triangles around each node (average clustering coefficient). Results. The average clustering coefficients for high- and low-amplitude δ Sct stars (HADS and LADS) display two different linear correlations with the peak-to-peak amplitude of the TESS light curves, which naturally separate them into two groups. For the first time, this approach yields such a separation without resorting to an ad hoc criterion. Exponential fits to the HVG degree distributions of both HADS and LADS stars yield indices that suggest correlated stochastic generating processes for δ Sct light curves. By applying the theoretical expression for the HVG degree distribution of a random time series, we can distinguish significant pulsations from the background noise, which could become a practical tool in frequency analyses of stars going forward.
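The horizontal visibility mapping described above is simple to prototype. Below is a minimal, illustrative sketch (not the authors' pipeline) that builds an undirected HVG from a synthetic light curve and computes the degree distribution and average clustering coefficient with networkx; the toy signal and all names are placeholder assumptions.

```python
# Minimal sketch of the horizontal visibility graph (HVG) mapping described above.
# Synthetic data stands in for a TESS light curve; this is not the authors' code.
import numpy as np
import networkx as nx

def horizontal_visibility_graph(values):
    """Link samples i < j when every intermediate sample lies strictly below both."""
    g = nx.Graph()
    g.add_nodes_from(range(len(values)))
    for i in range(len(values) - 1):
        blocker = -np.inf
        for j in range(i + 1, len(values)):
            if blocker < min(values[i], values[j]):
                g.add_edge(i, j)          # i and j "see" each other horizontally
            blocker = max(blocker, values[j])
            if blocker >= values[i]:      # nothing beyond j can see i any more
                break
    return g

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
flux = np.sin(2 * np.pi * 1.7 * t) + 0.3 * rng.normal(size=t.size)  # toy pulsator

hvg = horizontal_visibility_graph(flux)
degrees = np.array([d for _, d in hvg.degree()])
print("mean degree:", degrees.mean())
print("average clustering coefficient:", nx.average_clustering(hvg))
# For an i.i.d. (uncorrelated) series the theoretical HVG degree distribution is
# P(k) = (1/3) * (2/3)**(k - 2); deviations from it hint at correlated dynamics.
```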
- Research Article
- 10.1112/blms.70202
- Sep 30, 2025
- Bulletin of the London Mathematical Society
- Yongjiang Duan + 2 more
Abstract This paper is concerned with random holomorphic functions over the infinite‐dimensional polydisc. Billard's celebrated theorem states, among other things, that for a random holomorphic function in a single variable (obtained by placing a random sign before each Taylor coefficient), the boundedness of the function can be automatically improved to continuity. We extend Billard's theorem in three directions. First, we extend it to the case of infinitely many variables. Second, we establish a converse, showing that the condition is necessary and not merely sufficient; such results are rare in the literature. In particular, we define Billard's property (BP) for random variables and prove: (1) all symmetric random variables have BP; (2) a square‐integrable random variable has BP if and only if it is centered. Third, our results also encompass applications to random Dirichlet series.
- Research Article
- 10.1080/17538947.2025.2528651
- Jul 9, 2025
- International Journal of Digital Earth
- Yangcan Bao + 6 more
ABSTRACT Accurate classification of mangrove species is essential for sustainable conservation and precise carbon sink estimation. Integrating phenological information holds promise for enhancing species distinguishability and improving classification accuracy. However, further research is needed to clarify the roles of vegetation indices (VIs) and to identify the key phenological months for mangrove classification. In this study, a random forest (RF) algorithm and Sentinel-2 time series data were applied using the Google Earth Engine platform. Results showed that structural and physiological VIs significantly improved classification accuracy by 7–43% compared to the initial accuracy. Quantification of variable importance using RF revealed that the structural and physiological VIs exhibited dynamic key-month variations in spring and summer but maintained temporal stability during autumn and winter. Moreover, classification based on key phenological months reduced data redundancy and achieved higher accuracy than using all monthly VIs. The structural VIs achieved a stable overall accuracy of 88 ± 4%, while the physiological VIs exhibited two-stage differentiation, with some achieving accuracies above 91% and others below 70%. The results of this study highlight the key phenological patterns and provide guidance for VI selection in future research.
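As a rough illustration of the workflow the abstract describes (monthly vegetation-index features fed to a random forest, followed by variable-importance ranking), the sketch below uses scikit-learn on synthetic data; the feature names and class labels are placeholder assumptions, not the study's Sentinel-2 dataset.

```python
# Illustrative random-forest species classification from monthly VI features.
# Synthetic data only; the real study used Sentinel-2 composites on Google Earth Engine.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
months = [f"{vi}_{m:02d}" for vi in ("NDVI", "EVI") for m in range(1, 13)]
X = rng.normal(size=(600, len(months)))          # 600 pixels x 24 monthly VI features
y = rng.integers(0, 4, size=600)                 # 4 hypothetical mangrove species

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)

print("overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))
# Rank features to locate the "key phenological months", as the study does.
ranked = sorted(zip(months, rf.feature_importances_), key=lambda kv: -kv[1])
for name, imp in ranked[:5]:
    print(f"{name}: {imp:.3f}")
```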
- Research Article
- 10.1371/journal.pbio.3003267
- Jul 7, 2025
- PLOS Biology
- Ainsley Temudo + 3 more
Memory systems in humans are less segregated than initially thought as learning tasks from different memory domains (e.g., declarative versus procedural) can recruit similar brain areas. However, it remains unclear whether the functional role of these overlapping brain regions – and the hippocampus in particular – is domain-general. Here, we test the hypothesis that the hippocampus encodes and preserves the temporal order of sequential information irrespective of the nature of that information. We used multivariate pattern analyses (MVPA) of functional magnetic resonance imaging (fMRI) data acquired during the execution of learned sequences of movements and objects to assess brain patterns related to procedural and declarative memory processes, respectively. We also tested whether the hippocampus represents information about temporal order of items (here movements and objects in the motor and declarative domains, respectively) in a learned sequence irrespective of their nature. We also examined such coding in brain regions involved in both motor (primary and premotor cortices) and object (perirhinal cortex and parahippocampus) sequence learning. Our results suggest that hippocampal and perirhinal multivoxel activation patterns do not carry information about specific items or temporal position in a random series of objects or movements. Rather, these regions code for the representation of items in their learned temporal position in sequences irrespective of their nature (i.e., item-position coding). In contrast, although all other ROIs showed evidence of item-position coding, this representation could – at least partially – be attributed to the coding of other information such as position information. Altogether, our findings indicate that regions in the medial temporal lobe represent the temporal order of sequential information similarly in both the declarative and the motor memory domains. Our data suggest that these regions contribute to the development of item-position maps that might provide a cognitive framework for sequential behaviors irrespective of their nature.
- Research Article
- 10.1103/physrevresearch.7.023177
- May 22, 2025
- Physical Review Research
- Boyuan Shi
We introduce a family of (semi-)bold-line series assisted by 1/Nf expansions, with Nf being the number of fermion flavors. If there is no additional Nf cut, the series reduces to the random phase approximation series in the density-density channel, complementary to the particle-hole and particle-particle channels introduced in previous work. To address the highly localized integrands in diagrammatic Monte Carlo, we introduce a VEGAS-RG-MCMC sampling method that significantly reduces the autocorrelation time without using the state-of-the-art many-configuration MCMC (MCMCMC) method; combining the two approaches is also straightforward. We performed extensive benchmarks of density, energy, and pressure for the t−t′ SU(Nf) Hubbard model on square and honeycomb lattices against a wide range of numerical methods. For benchmark purposes, we also implement a bare-U symmetry-broken perturbation series for the two-dimensional SU(2) Hubbard model on the honeycomb lattice, where we find encouraging results from weak to intermediate couplings.
- Research Article
- 10.52783/jisem.v10i4.9669
- May 10, 2025
- Journal of Information Systems Engineering and Management
- Mustapha Amor
This study discusses the use of ARMA coefficient models. For any random time series, an ARMA(p,q) model can be written as detailed by Box and Jenkins. Such models have proven effective in many applications, including earthquake records and acceleration time series generated by random models within given frequency and time limits, and comprehensive reviews exist of their use as models of acceleration time series in both the frequency and time domains. Several research papers have addressed ARMA models, and Box and Jenkins discussed them in detail. Here, we express the earthquake amplitude in terms of the ARMA(p,q) coefficients; the ARMA(2,2) model proves the most appropriate in this case for the regions of northern Algeria from which the Aïn Defla, Casablanca, and Affroun earthquakes were taken. Taking the AIC criterion into account, we confirm that the best-fitting model is indeed ARMA(2,2), which is the model we studied. Using mathematical relationships, including the relationship used to calculate the seismic magnitude, we find agreement with the ARMA(2,2) model.
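A minimal sketch of the kind of order selection described above, fitting ARMA(p,q) candidates and keeping the one with the lowest AIC, using statsmodels on a synthetic series; the simulated data and candidate orders are illustrative assumptions, not the Algerian accelerograms.

```python
# Illustrative ARMA(p,q) order selection by AIC with statsmodels (synthetic data).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Simulate an ARMA(2,2)-like series as a stand-in for an acceleration record.
n, e = 1000, rng.normal(size=1000)
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t-1] - 0.3 * x[t-2] + e[t] + 0.4 * e[t-1] + 0.2 * e[t-2]

best = None
for p in range(4):
    for q in range(4):
        try:
            fit = ARIMA(x, order=(p, 0, q)).fit()   # d = 0 gives a plain ARMA(p,q)
        except Exception:
            continue
        if best is None or fit.aic < best[0]:
            best = (fit.aic, p, q)

print("lowest AIC: %.1f for ARMA(%d,%d)" % best)
```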
- Research Article
- 10.1063/5.0226652
- Apr 1, 2025
- Review of Scientific Instruments
- Jingyun Xue + 3 more
Addressing the uncertainty in calculating the damage effectiveness of aerial maneuvering targets due to their random intersection with warhead fragment clusters formed by projectile proximity explosions, this paper puts forward a numerical method for calculating the damage probability of an aerial maneuvering target under intersection between the warhead fragment group and a target with random dwell time sequence (RDTS) characteristics. Leveraging the random scattering characteristics of warhead fragment clusters and incorporating the vulnerability characteristics of the aerial maneuvering target, a model is established for calculating the probability of hitting and damaging an aerial maneuvering target with the warhead fragment cluster from a single projectile's explosion. Factors related to the random dwell time of the target within the effective damage area of the warhead fragment cluster from the projectile's proximity explosion are then introduced, resulting in a model for calculating the damage probability of an aerial maneuvering target under RDTS. In addition, we provide a mathematical description of the statistical characteristics of the damage probability. Simulations and example calculations demonstrate that this method accurately reflects the actual damage effectiveness of warhead fragment clusters formed by projectile proximity explosions against an aerial maneuvering target in a random scattering state, offering a novel theoretical approach for calculating the damage effectiveness of highly maneuverable targets in the future.
- Research Article
- 10.3390/rs17060978
- Mar 11, 2025
- Remote Sensing
- Zelong Chi + 7 more
Effective monitoring and management of potato late blight (PLB) is essential for sustainable agriculture. This study describes a methodology to improve PLB identification on a large scale. The method combines unsupervised and supervised machine learning algorithms. To improve the monitoring accuracy of the PLB regression model, the study used the K-Means algorithm in conjunction with morphological operations to identify potato growth areas. Input data consisted of monthly NDVI from Sentinel-2 and VH bands from Sentinel-1 (covering the year 2021). The identification results were validated on 221 field survey samples with an F1 score of 0.95. To monitor disease severity, we compared seven machine learning models: CART decision trees (CART), Gradient Tree Boosting (GTB), Random Forest (RF), a single-optical-data Random Forest time series model (TS–RF), a single-radar-data Random Forest time series model (STS–RF), a multi-source-data Gradient Tree Boosting time series model (MSTS–GTB), and a multi-source-data Random Forest time series model (MSTS–RF). The MSTS–RF model was the best performer, with a validation RMSE of 20.50 and an R² of 0.71. The input data for the MSTS–RF model consisted of spectral indices (NDVI, NDWI, NDBI, etc.), radar features (VH and VV bands), texture features, and Sentinel-2 bands synthesized as a monthly time series from May to September 2021. The feature importance analysis highlights key features for disease identification: the NIR band (B8), DVI, and SAVI for Sentinel-2, and the VH band for Sentinel-1. Notably, the blue-band data (458–523 nm) were critical during the month of May. These features, related to vegetation health and soil moisture, are critical for early detection. This study presents for the first time a large-scale map of PLB distribution in China at a spatial resolution of 10 m with an RMSE of 26.52. The map provides valuable decision support for agricultural disease management, demonstrating the effectiveness and practical potential of the proposed method for large-scale monitoring.
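As a rough sketch of the unsupervised masking step described above (K-Means clustering on NDVI/VH features followed by a morphological clean-up), the code below uses scikit-learn and SciPy on a synthetic raster; the band values, cluster count, and structuring element are assumptions, not the study's configuration.

```python
# Illustrative K-Means masking of crop pixels followed by morphological opening.
# Synthetic two-band raster (monthly NDVI mean, VH backscatter); not the study's data.
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

rng = np.random.default_rng(3)
h, w = 200, 200
ndvi = rng.normal(0.3, 0.2, size=(h, w))
vh = rng.normal(-15.0, 3.0, size=(h, w))
ndvi[60:140, 60:140] += 0.4                      # a synthetic "potato" patch
features = np.stack([ndvi.ravel(), vh.ravel()], axis=1)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
labels = labels.reshape(h, w)
# Pick the cluster with the highest mean NDVI as the candidate crop class.
crop_cluster = max(range(3), key=lambda c: ndvi[labels == c].mean())
mask = labels == crop_cluster

# Morphological opening removes isolated pixels, as in the described workflow.
clean_mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
print("crop pixels before/after cleaning:", mask.sum(), clean_mask.sum())
```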
- Research Article
- 10.1029/2024gl108590
- Mar 10, 2025
- Geophysical Research Letters
- Hongyan Chen + 6 more
Abstract Solar‐terrestrial interactions are a topic of considerable interest and attention. In this paper, we introduce a new method called shift neighborhood matching correlation (SNMC) to investigate the relationship between geomagnetic storms and earthquakes across various time lags. To assess the significance of the correlations, we employ random sampling series to replace one or both of the time series. Analyzing nearly a century of data, our results reveal an increased likelihood of earthquakes following geomagnetic storms, particularly 27–28 days afterward. Conventional statistical methods confirm the significance of this correlation. However, when earthquakes are replaced with random series, the statistical significance derived from the SNMC method diminishes, highlighting the intrinsic properties of geomagnetic storm phenomena. We discuss two potential physical mechanisms to explain the correlations. While the earthquake probability gain based solely on geomagnetic storms is insufficient for reliable prediction, it may be useful in integrated multi‐geophysical predictive models.
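The surrogate-based significance check described above can be illustrated generically. The sketch below computes a lagged correlation and compares it against shuffled surrogate series; it is a simplified stand-in on synthetic data and is not the SNMC algorithm itself.

```python
# Generic lagged-correlation check with random surrogates, in the spirit of the
# significance test described above (this is NOT the SNMC algorithm itself).
import numpy as np

rng = np.random.default_rng(7)
n_days = 20000
storms = (rng.random(n_days) < 0.02).astype(float)        # synthetic storm days
quakes = (rng.random(n_days) < 0.05).astype(float)        # synthetic earthquake days

def lag_corr(a, b, lag):
    """Correlation between a(t) and b(t + lag)."""
    return np.corrcoef(a[:-lag], b[lag:])[0, 1]

observed = lag_corr(storms, quakes, lag=27)

# Replace the earthquake series with shuffled surrogates to estimate significance.
surrogate = np.array([lag_corr(storms, rng.permutation(quakes), lag=27)
                      for _ in range(1000)])
p_value = np.mean(np.abs(surrogate) >= abs(observed))
print(f"observed r = {observed:.4f}, surrogate p-value = {p_value:.3f}")
```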
- Research Article
- 10.30574/ijsra.2025.14.2.0370
- Feb 28, 2025
- International Journal of Science and Research Archive
- Arunraju Chinnaraju
Consumer segmentation and targeting are essential for precision marketing, allowing businesses to deliver personalized experiences. The article explores the transformative role of autonomous AI agents in enhancing consumer segmentation and targeting within the data-driven marketing landscape. The proposed framework integrates machine learning (ML), natural language processing (NLP), and predictive analytics to continuously optimize segmentation models, enabling real-time targeting and hyper-personalization without human oversight. Autonomous agents dynamically manage segmentation by leveraging unsupervised learning algorithms, including K-means and DBSCAN, to refine clusters and discover complex micro-segments based on evolving consumer behavior and preferences. The AI agents use reinforcement learning to enhance campaign management through continuous feedback loops. By monitoring real-time performance metrics, such as click-through rates and conversions, they dynamically adjust ad spend, resource allocation, and personalized content delivery across digital channels. Predictive models, including Random Forests and time series analysis, further support real-time consumer behavior forecasting. This automation reduces operational inefficiencies, speeds up decision-making, and ensures marketing strategies remain relevant and adaptive. Ethical considerations, including data privacy and algorithmic fairness, are integral to the framework, promoting responsible AI deployment. Case studies from industries such as e-commerce and streaming illustrate significant improvements in campaign efficiency, customer engagement, and return on investment. Autonomous AI enables scalable, data-driven solutions that give businesses a competitive edge in rapidly changing markets.
- Research Article
- 10.1007/s11425-024-2352-9
- Feb 25, 2025
- Science China Mathematics
- Jiale Chen + 2 more
Littlewood-type theorems for random Dirichlet series
- Research Article
- 10.25686/2306-2819.2024.4.68
- Feb 21, 2025
- Vestnik of Volga State University of Technology. Series Radio Engineering and Infocommunication Systems
- Р.Р Раупов + 3 more
Introduction. To accurately reproduce test conditions in semi-natural simulation stands, it is essential to generate signals with various probability distributions over power and time. Pseudorandom number generators (PRNGs) are commonly used for this purpose. Given their practical application in semi-natural simulation stands, evaluating the randomness of the generated sequences is crucial. The NIST statistical test suite is widely used to assess the randomness of number sequences; it determines the statistical similarity between a generated pseudorandom sequence and a theoretically perfect random sequence. This study analyzes, using purpose-built simulation software, how binary sequences generated by PRNGs defined by GOST R ISO 28640-2012 perform in NIST randomness tests. Methods and Tools. Software tools were developed to simulate PRNGs based on GOST R ISO 28640-2012 using the following methods: the five-parameter method, the Tausworthe method, the combined Tausworthe method, and the Mersenne Twister method. To assess the randomness of these generators, their output sequences were subjected to 16 NIST statistical tests. Each test evaluated finite-length sequences, calculating statistics and comparing them with the reference statistics of a perfectly random sequence. Results. The Mersenne Twister method generated the sequence most closely resembling a true random series, passing 14 out of 16 NIST tests. A threshold pass level for the NIST tests was selected, and engineering recommendations were formulated for choosing the PRNG with the largest number of successfully passed tests. It was also observed that modifying the input parameters of certain PRNG methods could enhance their ability to pass specific tests, thereby improving the overall "randomness" of the generated sequences. Conclusion. This research developed and implemented software tools for modeling PRNGs based on GOST R ISO 28640-2012 and evaluated their randomness using the NIST test suite. The results indicate that the Mersenne Twister method is the most suitable PRNG for semi-natural simulation stands for asynchronous radio-electronic systems. Further refinement of the developed generators, by tuning their input parameters, is needed to increase their statistical test pass rates.
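A minimal illustration of the evaluation idea: generate a bit stream with a Mersenne Twister generator (NumPy's MT19937) and apply the NIST frequency (monobit) test. Only one of the 16 tests is shown, and the 0.01 significance threshold is the conventional SP 800-22 level, assumed here rather than taken from the paper.

```python
# Mersenne Twister bit stream plus the NIST frequency (monobit) test as a sketch
# of the randomness checks described above; only one of the 16 NIST tests is shown.
import math
import numpy as np

mt = np.random.Generator(np.random.MT19937(seed=12345))    # Mersenne Twister core
bits = mt.integers(0, 2, size=1_000_000)                   # binary test sequence

# Monobit test: map bits to +/-1, sum, and derive a p-value from the complementary
# error function, as specified in NIST SP 800-22.
s_obs = abs(np.sum(2 * bits - 1)) / math.sqrt(bits.size)
p_value = math.erfc(s_obs / math.sqrt(2))
print(f"monobit p-value = {p_value:.4f}  ->  {'pass' if p_value >= 0.01 else 'fail'}")
```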
- Research Article
- 10.3390/jpm15020063
- Feb 7, 2025
- Journal of personalized medicine
- Masab Mansoor + 1 more
Background: The rapid adoption of telehealth services for youth mental health care necessitates a comprehensive evaluation of its effectiveness. This study aimed to analyze the impact of telehealth on youth mental health outcomes using artificial intelligence techniques applied to large-scale public health data. Methods: We conducted an AI-driven analysis of data from the National Survey on Drug Use and Health (NSDUH) and other SAMHSA datasets. Machine learning techniques, including random forest models, K-means clustering, and time series analysis, were employed to evaluate telehealth adoption patterns, predictors of effectiveness, and comparative outcomes with traditional in-person care. Natural language processing was used to analyze sentiment in user feedback. Results: Telehealth adoption among youth increased significantly, with usage rising from 2.3 sessions per year in 2019 to 8.7 in 2022. Telehealth showed comparable effectiveness to in-person care for depressive disorders and superior effectiveness for anxiety disorders. Session frequency, age, and prior diagnosis were identified as key predictors of telehealth effectiveness. Four distinct user clusters were identified, with socioeconomic status and home environment strongly associated with positive outcomes. States with favorable reimbursement policies saw a 15% greater increase in youth telehealth utilization and 7% greater improvement in mental health outcomes. Conclusions: Telehealth demonstrates significant potential in improving access to and effectiveness of mental health services for youth. However, addressing technological barriers and socioeconomic disparities is crucial to maximize its benefits.
- Research Article
- 10.1093/imrn/rnaf007
- Jan 24, 2025
- International Mathematics Research Notices
- Stamatis Dostoglou + 1 more
Abstract We estimate non-asymptotically the probability of uniform clustering around the unit circle of the zeros of the $[m,n]$-Padé approximant of a random power series $f(z) = \sum _{j=0}^\infty a_{j} z^{j}$ for $a_{j}$ independent, with finite first moment, and Lévy function $ {\mathcal L}(a_{j}, \varepsilon ) =\sup _{t\in \mathbb R} \mathbb{P}(|a_{j} -t|<\varepsilon )$ satisfying ${\mathcal L}(a_{j}, \varepsilon ) \leq K\varepsilon $. Under the same assumptions we show that almost surely $f$ has infinitely many zeros in the unit disc, with the unit circle serving as a natural boundary for $f$. For $R_{m}$ the radius of the largest disc containing at most $m$ zeros of $f$, the poles of the $[m,n]$-Padé approximant almost surely cluster uniformly at the circle of radius $R_{m}$ as $n \to \infty $ and $m$ stays fixed, and we provide almost sure rates of convergence of these $R_{m}$'s to $1$. We also show that our results on the clustering of the zeros hold for log-concave vectors $(a_{j})$ with not necessarily independent coordinates.
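As an informal numerical companion to the clustering statement above (not part of the paper), the sketch below builds a Padé approximant to a random power series with Gaussian coefficients using scipy.interpolate.pade and checks how many of its zeros lie near the unit circle; the degrees and tolerance are arbitrary choices.

```python
# Informal illustration: zeros of a Padé approximant to a random power series with
# i.i.d. Gaussian coefficients tend to concentrate near the unit circle.
import numpy as np
from scipy.interpolate import pade

rng = np.random.default_rng(2024)
num_deg, den_deg = 60, 5                       # numerator and denominator degrees
coeffs = rng.normal(size=num_deg + den_deg + 1)  # a_0, a_1, ... Taylor coefficients

# scipy's pade(an, m) uses denominator degree m; numerator degree defaults to the rest.
p, q = pade(coeffs, den_deg)
zeros = p.roots                                # zeros of the rational approximant
radii = np.abs(zeros)

near_circle = np.mean(np.abs(radii - 1.0) < 0.2)
print(f"{100 * near_circle:.0f}% of the {zeros.size} zeros lie within 0.2 of |z| = 1")
```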
- Research Article
- 10.1016/j.procs.2025.01.004
- Jan 1, 2025
- Procedia Computer Science
- Suman Chowdhury + 2 more
Hydroelectric Power Potentiality Analysis for the Future Aspect of Trends with R2 Score Estimation by XGBoost and Random Forest Regressor Time Series Models
- Research Article
- 10.1016/j.jenvman.2024.123746
- Jan 1, 2025
- Journal of environmental management
- Shuangjun Liu + 3 more
Balanced hydropower and ecological benefits in reservoir-river-lake system: An integrated framework with machine learning and game theory.
- Research Article
- 10.15559/25-vmsta276
- Jan 1, 2025
- Modern Stochastics: Theory and Applications
- Alexander Iksanov + 1 more
Buraczewski et al. (2023) proved a functional limit theorem (FLT) and a law of the iterated logarithm (LIL) for the random Dirichlet series ${\textstyle\sum _{k\ge 2}}\frac{{(\log k)^{\alpha }}}{{k^{1/2+s}}}{\eta _{k}}$ as $s\to 0+$, where $\alpha \gt -1/2$ and ${\eta _{1}},{\eta _{2}},\dots $ are independent identically distributed random variables with zero mean and finite variance. An FLT and a LIL are proved here in the boundary case $\alpha =-1/2$, which is technically more demanding than the case $\alpha \gt -1/2$. An FLT and a LIL for ${\textstyle\sum _{p}}\frac{{\eta _{p}}}{{p^{1/2+s}}}$ as $s\to 0+$, where the sum is taken over the prime numbers, are stated as conjectures.
- Research Article
- 10.1051/itmconf/20257204009
- Jan 1, 2025
- ITM Web of Conferences
- Alena Stupina + 1 more
The article examines two time series forecasting methods: the Autoregressive Moving Average (ARMA) model and the Autoregressive Integrated Moving Average (ARIMA) model. The ARIMA model differs from the ARMA model only in that forecasting is performed not on the absolute values of the series levels but on differences of order d, which makes it applicable to non-stationary time series. Financial time series are traditionally considered non-stationary. However, a Hurst exponent less than or equal to 0.5 indicates a random or anti-persistent time series. The paper hypothesizes that for series that are random or anti-persistent according to the Hurst exponent, there is no need to take differences, and the ARMA model is sufficient for forecasting. To test this hypothesis, we carried out forecasts of leading world indices, stocks, and currency pairs with a Hurst exponent less than or equal to 0.5 over a 10-year period using the ARMA and ARIMA methods and compared the results with the MAPE metric. In most cases the ARMA forecasts had a smaller error, which confirms the initial hypothesis.
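A toy version of the comparison described above: fit ARMA (d = 0) and ARIMA (d = 1) to the same series and score out-of-sample forecasts with MAPE using statsmodels; the synthetic series and orders are placeholders, not the indices and currency pairs from the study.

```python
# Toy ARMA (d=0) vs ARIMA (d=1) comparison scored by MAPE on synthetic data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
# A mean-reverting (anti-persistent-like) synthetic price series.
n = 600
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t-1] + rng.normal()
prices = 100.0 + x

train, test = prices[:-50], prices[-50:]

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

arma_fc = ARIMA(train, order=(1, 0, 1)).fit().forecast(steps=50)
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=50)

print(f"ARMA(1,1)    MAPE: {mape(test, arma_fc):.2f}%")
print(f"ARIMA(1,1,1) MAPE: {mape(test, arima_fc):.2f}%")
```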
- Research Article
- 10.17721/1812-5409.2025/1.6
- Jan 1, 2025
- Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics
- Oleg Makarchuk + 1 more
According to the Jessen-Wintner theorem, the sum of a random series with independent, discretely distributed terms that converges with probability one has a distribution of pure Lebesgue type, i.e., it is discrete, absolutely continuous, or singular. Lévy's theorem gives necessary and sufficient conditions for the corresponding distribution (a distribution of Jessen-Wintner type) to be discrete. In the general case, the problem of finding criteria for the absolute continuity (singularity) of Jessen-Wintner type distributions is difficult, so researchers consider it within particular classes of distributions. An important class of distributions satisfying the Jessen-Wintner theorem consists of classical Bernoulli convolutions and their generalizations (generalized Bernoulli convolutions), which have been studied by many researchers, and interest in this problem remains high. Particular difficulties in studying the Lebesgue structure of Jessen-Wintner type distributions, in the class of generalized Bernoulli convolutions in particular, arise for generalized Bernoulli convolutions with substantial overlaps. For such convolutions, classical measure-theoretic approaches, based in particular on Kakutani's theorem, in many cases yield only certain sufficient conditions for the absolute continuity (singularity) of the corresponding distribution function. In addition to the classical problem of deepening the Jessen-Wintner theorem for generalized Bernoulli convolutions, related problems include the topological-metric structure of the distribution spectrum, fractal analysis of the distribution supports, asymptotic properties of the characteristic function at infinity, and others. The paper specifies necessary and sufficient conditions for the absolute continuity and singularity of the distribution of one class of generalized symmetric Bernoulli convolutions. The corresponding results generalize known results on the Lebesgue structure of the distribution of a random variable with independent s-digits. Emphasis is also placed on proving that the spectrum of the class of distributions considered in the paper has positive Lebesgue measure.
- Research Article
- 10.54254/2754-1169/2024.18817
- Dec 26, 2024
- Advances in Economics, Management and Political Sciences
- Yunya Xia
This article examines the critical roles of Random Forest (RF) and Time Series (TS) models in forecasting China's crude oil futures prices, providing a comprehensive comparison of their predictive capabilities. The study employs ARIMA and SARIMA models, known for their proficiency in capturing data trends and seasonality, to harness the temporal aspects of oil price movements. In contrast, the RF model is recognized for its robustness in handling complex datasets, offering a nuanced approach to non-linear relationships and variable interactions. The analysis reveals that the ARIMA(0,1,4) model outperforms the ARIMA(1,1,0) model in terms of prediction error and statistical fit. However, the RF model's strength lies in its precision and flexibility, particularly in responding to market fluctuations. The paper concludes that ARIMA models are more suitable for long-term strategic planning due to their stability, whereas RF models excel in short-term forecasting and high-accuracy prediction scenarios. These findings are invaluable for market participants, offering them data-driven strategies to optimize their decision-making processes in the volatile oil futures market.