Articles on Probability Theory published in the last 50 years
- Research Article
- 10.1186/s40677-025-00333-9
- Nov 6, 2025
- Geoenvironmental Disasters
- Kyrillos Ebrahim + 3 more
Abstract. Introduction and Research Gap: This study presents a comprehensive framework for predicting volumetric water content (VWC) to mitigate shallow, rainfall-induced landslides, bridging existing gaps in the literature. Methodology: The framework synergistically integrates the empirical strengths of deep learning (DL) with the physical dynamics of the VWC subsurface behavior. Statistical, shallow machine learning (ML), and DL models were investigated with optimization techniques and sensitivity analyses to establish benchmarks for comparison and derive optimal predictions. DL and probability theory enable both point and interval predictions. Findings: Validation on the Pa Mei landslide demonstrates strong performance, with mean absolute errors (MAE) ranging from 0.35% to 1.22% and Predicted Interval Coverage Probabilities (PICP) from 0.86 to 0.91. Predicted VWC deviations were propagated into Factor of Safety (FOS) calculations, yielding robust performance metrics with R² and PICP of 0.89 and 0.85, respectively. Transferability is demonstrated at the Tung Chung landslide, where MAE ranges from 0.36% to 1.25% and PICP from 0.86 to 0.95. Significance: This framework demonstrates improved accuracy and introduces a practical data-sharing mechanism to address monitoring challenges such as power consumption and data loss, offering a robust tool for hazard mitigation and decision support.
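As a rough illustration of the two reported metrics (a minimal sketch, not the authors' code; all variable names and values below are hypothetical), MAE and PICP for an interval forecast can be computed as follows:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between observed and predicted VWC."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def picp(y_true, lower, upper):
    """Prediction Interval Coverage Probability: fraction of observations
    that fall inside the predicted [lower, upper] interval."""
    y_true = np.asarray(y_true)
    inside = (y_true >= np.asarray(lower)) & (y_true <= np.asarray(upper))
    return inside.mean()

# Toy example with made-up VWC values (%)
obs  = np.array([32.1, 33.4, 35.0, 36.2])
pred = np.array([31.8, 33.9, 34.6, 36.9])
lo, hi = pred - 1.0, pred + 1.0          # hypothetical prediction interval
print(mae(obs, pred), picp(obs, lo, hi))
```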
- Research Article
- 10.1007/s13194-025-00690-0
- Nov 3, 2025
- European Journal for Philosophy of Science
- Joel Katzav
Abstract Recent work on the epistemology of climate science includes arguments that are against probabilistic representations of uncertainty about climate and for possibilistic ones as well as some development and use of the latter. I reinstate these arguments, partly by rebutting Corey Dethier’s recent challenge to them and partly by arguing that they remain effective against recent improvements to probabilistic representations. Recognising, however, that the case for possibilistic representations can be undermined by problematic interpretations of epistemic possibilities, I set out criteria of adequacy for such interpretations in the climate context while arguing for a preferred interpretation. I criticise the appropriateness of standard interpretations, according to which a proposition is epistemically possible if and only if it is not recognised to be excluded by what is known, as well as some other prominent non-probabilistic interpretations. So too, I criticise interpretations of epistemic possibilities in terms of upper probabilities. I conclude that an interpretation of epistemic possibilities as possibilities that are consistent with knowledge that approximates the basic way things are is preferable to the other available interpretations.
- Research Article
- 10.12978/jat.2023-11.001224180417
- Nov 3, 2025
- Journal of Analytic Theology
- Amy Seymour
Pruss (2016) argues that Christian philosophers should reject Open Futurism, where Open Futurism is the thesis that “there are no true undetermined contingent propositions about the future” (461). First, Pruss argues “on probabilistic grounds that there are some statements about infinite futures that Open Futurism cannot handle” (461). In other words, he argues that either the future is finite or that Open Futurism is false. Next, Pruss argues that since Christians are committed to a belief in everlasting life, they must deny that the future is finite. From here, Pruss concludes that Christians must reject Open Futurism. In practice, Pruss’s argument extends to anyone who endorses everlasting life. In this essay, I respond to Pruss’s argument on behalf of Open Futurism: pace Pruss, the open futurist can consistently believe in everlasting life while also accepting the basic principles of probability theory.
- Research Article
- 10.1016/j.ins.2025.122352
- Nov 1, 2025
- Information Sciences
- Jieren Xie + 7 more
Belief permutation entropy of time series: A natural transition in analytical framework from probability theory to evidence theory
- Research Article
- 10.3390/math13213416
- Oct 27, 2025
- Mathematics
- Abdulmajeed Albarrak + 2 more
In this study, we introduce a novel transformation of probability measures that unifies two significant transformations in free probability theory: the t-transformation and the V_a-transformation. Our unified transformation, denoted U(a,t), is defined analytically via a modified functional equation involving the Cauchy transform, and reduces to the t-transformation when a=0 and to the V_a-transformation when t=1. We investigate some properties of this new transformation through the lens of Cauchy–Stieltjes kernel (CSK) families and the corresponding variance functions (VFs). We derive a general expression for the VF resulting from the U(a,t)-transformation. This new expression is applied to prove a central result: the free Meixner family (FMF) of measures is invariant under this transformation. Furthermore, novel limiting theorems involving the U(a,t)-transformation are proved, providing new insights into the relations between some important measures in free probability, such as the semicircle, Marchenko–Pastur, and free binomial measures.
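For readers outside free probability, the objects referenced above are, in their commonly cited forms, the Cauchy transform and the t-transformation; this is background only, and the paper's modified functional equation defining U(a,t) is not reproduced here:

```latex
% Cauchy transform of a probability measure \mu on \mathbb{R}, and its reciprocal
G_\mu(z) = \int_{\mathbb{R}} \frac{\mu(\mathrm{d}x)}{z - x}, \qquad z \in \mathbb{C}^{+},
\qquad F_\mu(z) = \frac{1}{G_\mu(z)} .

% t-transformation, as commonly stated via the reciprocal Cauchy transform;
% per the abstract, it is recovered from U(a,t) when a = 0
F_{\mu_t}(z) = t\, F_\mu(z) + (1 - t)\, z .
```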
- Research Article
- 10.1007/s40509-025-00375-6
- Oct 23, 2025
- Quantum Studies: Mathematics and Foundations
- Maik Reddiger
Abstract By formulating the axioms of quantum mechanics, von Neumann also laid the foundations of a “quantum probability theory”. As such, it is regarded as a generalization of the “classical probability theory” due to Kolmogorov. Outside of quantum physics, however, Kolmogorov’s axioms enjoy universal applicability. This raises the question of whether quantum physics indeed requires such a generalization of our conception of probability or whether von Neumann’s axiomatization of quantum mechanics was contingent on the absence of a general theory of probability in the 1920s. This work argues in favor of the latter position. In particular, it shows how to construct a mathematically rigorous theory for non-relativistic N-body quantum systems subject to a time-independent scalar potential, which is based on Kolmogorov’s axioms and physically natural random variables. Though this theory is provably distinct from its quantum-mechanical analog, it nonetheless reproduces central predictions of the latter. Further work may make an empirical comparison possible. Moreover, the approach can in principle be adapted to other classes of quantum-mechanical models. Part II of this series discusses the empirical violation of Bell inequalities in the context of this approach. Part III addresses the projection postulate and the question of measurement.
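For reference, the Kolmogorov axioms that serve as the "classical" baseline here are the standard ones: a probability measure P on a σ-algebra 𝓕 over a sample space Ω satisfies

```latex
P(A) \ge 0 \ \ \text{for all } A \in \mathcal{F}, \qquad P(\Omega) = 1, \qquad
P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
\ \ \text{for pairwise disjoint } A_i \in \mathcal{F}.
```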
- Research Article
- 10.54254/2753-8818/2025.dl27996
- Oct 23, 2025
- Theoretical and Natural Science
- Huayuxin Chen
Machine learning (ML) has emerged as a transformative technology influencing various aspects of modern society. While often perceived as a computational discipline, its theoretical foundations are deeply rooted in mathematical principles. This paper aims to dissect the fundamental mathematical concepts that serve as the backbone of machine learning algorithms and systems. The research motivation stems from the need to understand the mathematical framework that enables machines to learn from data, moving beyond the "black box" perception of ML systems. Through analytical methodology, this paper systematically explores how core mathematical disciplines, including linear algebra, probability theory, calculus, and information theory, contribute to the development and implementation of machine learning models such as linear regression, neural networks, and Bayesian classifiers. The analysis demonstrates that linear algebra provides structural frameworks for data representation, probability theory enables uncertainty quantification, and calculus and optimization facilitate the learning process, while information theory offers evaluation metrics. This study serves as a valuable resource for students and early-career researchers beginning their journey into artificial intelligence, equipping them to move beyond surface-level model deployment and engage in meaningful innovation.
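As a small illustration of how these disciplines meet in one of the models named above (a sketch under generic assumptions, not code from the paper), ordinary least squares regression is a linear-algebra computation whose closed form is derived by calculus and justified probabilistically as the maximum likelihood estimate under Gaussian noise:

```python
import numpy as np

def fit_linear_regression(X, y):
    """Least squares fit: minimize ||y - Xw||^2. Setting the gradient to zero
    (calculus) yields a linear system solved here with a stable linear-algebra
    routine; under Gaussian noise this is also the maximum likelihood estimate."""
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 1.5 + X @ np.array([2.0, -0.5]) + rng.normal(scale=0.1, size=100)
print(fit_linear_regression(X, y))              # approximately [1.5, 2.0, -0.5]
```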
- Research Article
- 10.1523/jneurosci.0989-25.2025
- Oct 23, 2025
- The Journal of neuroscience : the official journal of the Society for Neuroscience
- Stephanie C Leach + 3 more
Adaptive behavior requires integrating information from multiple sources. These sources can originate from distinct channels, such as internally maintained latent cognitive representations or externally presented sensory cues. Because these signals are often stochastic and carry inherent uncertainty, integration is challenging. However, the neural and computational mechanisms that support the integration of such stochastic information remain unknown. We introduce a computational neuroimaging framework to elucidate how brain systems integrate internally maintained and externally cued stochastic information to guide behavior. Neuroimaging data were collected from healthy adult human participants (both male and female). Our computational model estimates trial-by-trial beliefs about internally maintained latent states and externally presented perceptual cues, then integrates them into a unified joint probability distribution. The entropy of this joint distribution quantifies overall uncertainty, which enables continuous tracking of probabilistic task beliefs, prediction errors, and updating dynamics. Results showed that latent-state beliefs are encoded in distinct regions from perceptual beliefs. Latent-state beliefs were encoded in the anterior middle frontal gyrus, mediodorsal thalamus, and inferior parietal lobule, whereas perceptual beliefs were encoded in spatially distinct regions including lateral temporo-occipital areas, intraparietal sulcus, and precentral sulcus. The integrated joint probability and its entropy converged in frontoparietal hub areas, notably middle frontal gyrus and intraparietal sulcus. These findings suggest that frontoparietal hubs read out and resolve distributed uncertainty to flexibly guide behavior, revealing how frontoparietal systems implement cognitive integration. Significance Statement: Flexible human behavior often depends on integrating information from multiple sources, such as memory and perception, each of which can be corrupted by noise. For example, a driver must integrate traffic signals (external cues) with their destination plan (internal goals) to decide when to turn. This study reveals how the human brain integrates multiple information sources to guide flexible behavior. More specifically, distinct brain regions encode internal beliefs and external sensory representations, while frontoparietal regions integrate this information in response to input noise. These findings provide a complete account of how the brain encodes and integrates multiple inputs to guide adaptive behavior.
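The integration step described here can be sketched in a few lines (a toy example with hypothetical numbers, not the authors' trial-by-trial model): two belief distributions are combined into a joint distribution whose Shannon entropy summarizes overall uncertainty.

```python
import numpy as np

# Toy sketch: combine a belief over latent task states with a belief over
# perceptual cue categories, assuming independence, and summarize overall
# uncertainty with the Shannon entropy of the joint distribution.
latent_belief  = np.array([0.7, 0.2, 0.1])       # p(latent state)
percept_belief = np.array([0.6, 0.4])            # p(perceptual cue)

joint = np.outer(latent_belief, percept_belief)  # joint distribution, sums to 1
entropy = -np.sum(joint * np.log2(joint))        # all entries are positive here
print(joint, entropy)
```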
- Research Article
- 10.47941/nsj.3272
- Oct 23, 2025
- Natural Science Journal
- Chrispine Mulenga Mwambazi + 2 more
Purpose: Decision-making under uncertainty remains a foundational challenge in cognitive science and artificial intelligence. Classical Bayesian Probability Models (CBM) often fail to explain paradoxical cognitive behaviors such as order effects, ambiguity aversion, and context-dependent reasoning. This study seeks to compare Quantum Probability Theory (QPT) and Classical Bayesian Models in their ability to capture the dynamics of human decision-making. It aims to determine which framework more accurately reflects the cognitive mechanisms underlying reasoning under uncertainty. Methodology: A qualitative, exploratory research design was adopted, involving in-depth semi-structured interviews with 16 experts across psychology, philosophy, artificial intelligence, and cognitive neuroscience. Participants were purposively selected for their theoretical and empirical expertise in probabilistic reasoning. Data were analyzed using reflexive thematic analysis, guided by the Dual-Process Theory and Busemeyer’s Quantum Cognition framework. The analysis emphasized participants’ perspectives on theoretical assumptions, cognitive plausibility, and predictive utility between QPT and CBM paradigms. Findings: Thematic findings reveal that Quantum Probability Theory offers superior explanatory power in contexts involving cognitive ambiguity, contextual dependence, and non-commutativity of mental operations. Participants consistently reported that QPT better models real-world reasoning tasks where classical logic collapses, capturing the fluid and context-sensitive nature of human judgment. Conversely, while CBM remains effective in structured, low-uncertainty scenarios, it fails to accommodate superposition and interference effects inherent in human cognition. Unique Contribution to Theory, Practice, and Policy (Recommendations): The study contributes theoretically by demonstrating how quantum probabilistic models expand existing theories of bounded rationality and probabilistic reasoning in cognitive science. Practically, it encourages interdisciplinary collaboration between cognitive scientists, AI researchers, and philosophers to refine decision models that mirror human intuition more closely. Policy-wise, the findings support the integration of quantum-inspired approaches in the design of intelligent decision-support systems and cognitive architectures. The study recommends continued empirical validation of QPT within applied domains—such as behavioral economics, machine learning, and cognitive modeling—to strengthen its predictive and explanatory robustness.
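The non-commutativity and order effects the participants describe can be illustrated with a toy numerical example (this is generic quantum probability, not a model from the study): when two projectors do not commute, the probability of affirming question A and then question B differs from affirming them in the opposite order.

```python
import numpy as np

# Toy "order effect": for non-commuting projectors, sequential yes/yes
# probabilities depend on the order in which the questions are posed.
psi = np.array([1.0, 0.0])                       # initial belief state |0>
P_A = np.array([[1.0, 0.0], [0.0, 0.0]])         # projector onto |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
P_B = np.outer(plus, plus)                       # projector onto |+>

p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2
print(p_A_then_B, p_B_then_A)                    # 0.5 vs 0.25: order matters
```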
- Research Article
- 10.4171/aihpd/216
- Oct 22, 2025
- Annales de l’Institut Henri Poincaré D, Combinatorics, Physics and their Interactions
- Adrián Celestino + 1 more
The double tensor Hopf algebra has been introduced by Ebrahimi-Fard and Patras to provide an algebraic framework for cumulants in non-commutative probability theory. In this paper, we obtain a cancellation-free formula, represented in terms of Schröder trees, for the antipode in the double tensor Hopf algebra. We apply the antipode formula to recover cumulant-moment formulas as well as a new expression for Anshelevich’s free Wick polynomials in terms of Schröder trees.
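As background on the cumulant-moment formulas recovered here (the standard statement from free probability; the paper's Schröder-tree antipode formula itself is not reproduced), free moments and free cumulants are related by a sum over non-crossing partitions:

```latex
% Free moment--cumulant relation: NC(n) denotes the non-crossing partitions of {1,...,n}
m_n \;=\; \sum_{\pi \in NC(n)} \ \prod_{B \in \pi} \kappa_{|B|},
\qquad m_n = \varphi(a^n).
```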
- Research Article
- 10.3390/logics3040013
- Oct 21, 2025
- Logics
- Mark A Winstanley
Rationality has long been considered the quintessence of humankind. However, psychological experiments revealing reliable divergences in performances on reasoning tasks from normative principles of reasoning have cast serious doubt on the venerable dogma that human beings are rational animals. According to the standard picture, reasoning in accordance with principles based on rules of logic, probability theory, etc., is rational. The standard picture provides the backdrop for both the rationality and irrationality thesis, and, by virtue of the competence-performance distinction, diametrically opposed interpretations of reasoning experiments are possible. However, the standard picture rests on shaky foundations. Jean Piaget developed a psychological theory of reasoning, in which logic and mathematics are continuous with psychology but nevertheless autonomous sources of knowledge. Accordingly, logic, probability theory, etc., are not extra-human norms, and reasoners have the ability to reason in accordance with them. In this paper, I set out Piaget’s theory of rationality, using intra- and interpropositional reasoning as illustrations, and argue that Piaget’s theory of rationality is compatible with the standard picture but actually undermines it by denying that norms of reasoning based on logic are psychologically relevant for rationality. In particular, rather than logic being the normative benchmark, I argue that rationality according to Piaget has a psychological foundation, namely the reversibility of the operations of thought constituting cognitive structures.
- Research Article
- 10.12797/politeja.22.2025.98.09
- Oct 21, 2025
- Politeja
- Joachim Diec
Decision analysis is considered to be one of the most important research methods in political science. The problem, however, is to determine to what extent such analysis can be reliable. The application of classical decision theory, referring to statistical research and probability theory, to the assessment of the political decision-making process is usually inadequate. This results, among other things, from the inability to obtain reliable knowledge about the initial state of the decision and the state of nature, differences in the perspectives of the assessing entities, differences in short- and long-term priorities, and above all, the impossibility of achieving several goals at the same time due to their mutual exclusion. These difficulties are generalized by the ‘uncertainty principle,’ analogous to the Heisenberg equation: the more certain priorities are realized, the less it is possible to realize others. Similarly, the use of some paths of justification excludes others in the real decision-making process. What can serve as a useful procedure, however, is the ‘expansion of time horizons,’ i.e. the gradual distancing of the time perspective of benefits.
- Research Article
- 10.26636/jtit.2025.4.2261
- Oct 21, 2025
- Journal of Telecommunications and Information Technology
- Amit Kachavimath + 1 more
Software-defined networking (SDN) is now widely used in modern network infrastructures, but its centralized control design makes it vulnerable to distributed denial of service (DDoS) attacks targeting the SDN controller. These attacks are capable of disrupting the operation of the network and reducing its availability for genuine users. Existing detection and mitigation methods often suffer from numerous drawbacks, such as high computational costs and frequent false alarms, especially with standard machine learning or basic unsupervised approaches. To address these issues, a new framework is proposed that relies on multistep feature selection methods, including SelectKBest, ANOVA-F, and random forest, to select the most important network features, detects anomalies in an unsupervised manner using agglomerative clustering to identify suspicious hosts, and mitigates adverse impacts based on posterior probability and game theory. An evaluation conducted using benchmark datasets and validated through Mininet emulation demonstrates that the approach achieves better performance, with silhouette scores of 0.86 for InSDN and 0.95 for Mininet. The framework efficiently computes reputation scores to distinguish malicious hosts, enabling adaptive defense against evolving attack patterns while maintaining minimal computational overhead.
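A minimal sketch of the general pattern described (feature selection with an ANOVA F-test, unsupervised agglomerative clustering, and silhouette scoring) is given below; the synthetic data and all parameter choices are hypothetical and are not the authors' configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

# Synthetic flow features standing in for SDN traffic statistics.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (200, 20)), rng.normal(3, 1, (50, 20))])
y = np.array([0] * 200 + [1] * 50)               # labels used only for ANOVA-F scoring

# Step 1: keep the k most discriminative features (ANOVA F-test).
X_sel = SelectKBest(score_func=f_classif, k=8).fit_transform(X, y)

# Step 2: unsupervised grouping of hosts/flows into benign vs. suspicious clusters.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X_sel)

# Step 3: cluster quality, reported in the paper via silhouette scores.
print(silhouette_score(X_sel, labels))
```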
- Research Article
- 10.1101/2025.03.21.644222
- Oct 20, 2025
- bioRxiv
- Stephanie C Leach + 3 more
Adaptive behavior requires integrating information from multiple sources. These sources can originate from distinct channels, such as internally maintained latent cognitive representations or externally presented sensory cues. Because these signals are often stochastic and carry inherent uncertainty, integration is challenging. However, the neural and computational mechanisms that support the integration of such stochastic information remain unknown. We introduce a computational neuroimaging framework to elucidate how brain systems integrate internally maintained and externally cued stochastic information to guide behavior. Neuroimaging data were collected from healthy adult human participants (both male and female). Our computational model estimates trial-by-trial beliefs about internally maintained latent states and externally presented perceptual cues, then integrates them into a unified joint probability distribution. The entropy of this joint distribution quantifies overall uncertainty, which enables continuous tracking of probabilistic task beliefs, prediction errors, and updating dynamics. Results showed that latent-state beliefs are encoded in distinct regions from perceptual beliefs. Latent-state beliefs were encoded in the anterior middle frontal gyrus, mediodorsal thalamus, and inferior parietal lobule, whereas perceptual beliefs were encoded in spatially distinct regions including lateral temporo-occipital areas, intraparietal sulcus, and precentral sulcus. The integrated joint probability and its entropy converged in frontoparietal hub areas, notably middle frontal gyrus and intraparietal sulcus. These findings suggest that frontoparietal hubs read out and resolve distributed uncertainty to flexibly guide behavior, revealing how frontoparietal systems implement cognitive integration. Significance: Flexible human behavior often depends on integrating information from multiple sources, such as memory and perception, each of which can be corrupted by noise. For example, a driver must integrate traffic signals (external cues) with their destination plan (internal goals) to decide when to turn. This study reveals how the human brain integrates multiple information sources to guide flexible behavior. More specifically, distinct brain regions encode internal beliefs and external sensory representations, while frontoparietal regions integrate this information in response to input noise. These findings provide a complete account of how the brain encodes and integrates multiple inputs to guide adaptive behavior.
- Research Article
- 10.71465/ajbd3387
- Oct 19, 2025
- American Journal Of Big Data
- Zhihao Zheng
This paper focuses on the convergence modes and limit properties of the law of large numbers. It analyzes in depth the three core modes of convergence: convergence in probability, almost sure convergence, and convergence in distribution. It then explores the limit behavior of the law of large numbers in independent and identically distributed (i.i.d.) and non-i.i.d. settings, and discusses its theoretical significance and application value, aiming to provide a theoretical reference for research in probability theory and related fields.
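For reference, the three convergence modes discussed are the standard ones; the weak law of large numbers asserts convergence in probability of the sample mean, while the strong law asserts almost sure convergence.

```latex
X_n \xrightarrow{\ P\ } X: \quad \lim_{n\to\infty} P\bigl(|X_n - X| > \varepsilon\bigr) = 0
\ \ \text{for every } \varepsilon > 0 \quad \text{(in probability)}

X_n \xrightarrow{\ \text{a.s.}\ } X: \quad P\bigl(\lim_{n\to\infty} X_n = X\bigr) = 1
\quad \text{(almost surely)}

X_n \xrightarrow{\ d\ } X: \quad \lim_{n\to\infty} F_{X_n}(x) = F_X(x)
\ \ \text{at every continuity point } x \text{ of } F_X \quad \text{(in distribution)}
```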
- Research Article
- 10.1186/s12859-025-06272-4
- Oct 15, 2025
- BMC Bioinformatics
- Siqi Chen + 3 more
Background: The identification of protein-protein interaction (PPI) plays a crucial role in understanding the mechanisms of complex biological processes. Current research in predicting PPI has shown remarkable progress by integrating protein information with PPI topology structure. Nevertheless, these approaches frequently overlook the dynamic nature of protein and PPI structures during cellular processes, including conformational alterations and variations in binding affinities under diverse environmental circumstances. Additionally, the insufficient availability of comprehensive protein data hinders accurate protein representation. Consequently, these shortcomings restrict the model’s generalizability and predictive precision. Results: To address this, we introduce DCMF-PPI (Dynamic condition and multi-feature fusion framework for PPI), a novel hybrid framework that integrates dynamic modeling, multi-scale feature extraction, and probabilistic graph representation learning. DCMF-PPI comprises three core modules: (1) PortT5-GAT Module: The protein language model PortT5 is utilized to extract residue-level protein features, which are integrated with dynamic temporal dependencies. Graph attention networks are then employed to capture context-aware structural variations in protein interactions; (2) MPSWA Module: Employs parallel convolutional neural networks combined with wavelet transform to extract multi-scale features from diverse protein residue types, enhancing the representation of sequence and structural heterogeneity; (3) VGAE Module: Utilizes a Variational Graph Autoencoder to learn probabilistic latent representations, facilitating dynamic modeling of PPI graph structures and capturing uncertainty in interaction dynamics. Conclusion: We conducted comprehensive experiments on benchmark datasets demonstrating that DCMF-PPI outperforms state-of-the-art methods in PPI prediction, achieving significant improvements in accuracy, precision, and recall. The framework’s ability to fuse dynamic conditions and multi-level features highlights its effectiveness in modeling real-world biological complexities, positioning it as a robust tool for advancing PPI research and downstream applications in systems biology and drug discovery.
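The graph-attention plus variational-graph-autoencoder pattern named in the module descriptions can be sketched as follows (a minimal, generic example built on PyTorch Geometric; the layer sizes, features, and edges are hypothetical, and this is not the DCMF-PPI implementation):

```python
import torch
from torch_geometric.nn import GATConv, VGAE

class GATEncoder(torch.nn.Module):
    """Toy GAT encoder producing (mu, logstd) for a variational graph autoencoder.
    Node features could stand in for per-protein language-model embeddings;
    edges stand in for candidate PPIs."""
    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.gat = GATConv(in_dim, hidden_dim, heads=4, concat=False)
        self.mu = GATConv(hidden_dim, latent_dim, heads=1, concat=False)
        self.logstd = GATConv(hidden_dim, latent_dim, heads=1, concat=False)

    def forward(self, x, edge_index):
        h = torch.relu(self.gat(x, edge_index))
        return self.mu(h, edge_index), self.logstd(h, edge_index)

x = torch.randn(6, 32)                                   # 6 proteins, 32-dim features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # toy interaction edges
model = VGAE(GATEncoder(32, 64, 16))
z = model.encode(x, edge_index)                          # probabilistic latent embeddings
loss = model.recon_loss(z, edge_index) + model.kl_loss() # edge reconstruction + KL term
print(z.shape, float(loss))
```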
- Research Article
- 10.37497/eaglesustainable.v15i.534
- Oct 14, 2025
- Journal of Sustainable Competitive Intelligence
- Maksym Asechko + 3 more
Purpose: to substantiate the construction concept, develop software-algorithmic and hardware solutions that ensure increased accuracy, and study the properties of an on-board navigation complex (OBC) with increased interference immunity, based on the remote use of UAVs via a satellite data transmission system (NFRs). Methodology/approach: The methodological basis and research tools of this study are methods for building models of dynamic systems, methods of statistical data processing, the theory of optimal estimation and integrated processing of navigation information, simulation and semi-natural modeling methods, as well as full-scale testing methods. Originality/Relevance: The study comprehensively analyzes the effectiveness and limitations of a UAV remote control system operating via a satellite communication channel in the absence of an inertial navigation system (INS). A mathematical model was developed that allows estimating the maximum allowable time for remote use of the UAV without a critical loss of controllability. Key findings: This study lays the foundation for measurable criteria of UAV control effectiveness under conditions of navigation data degradation, with an orientation towards practical implementation in combat, search-and-rescue, or civilian high-risk settings. Theoretical/methodological contributions: systems analysis methods, network interaction theory, mathematical modeling methods, probability theory, machine learning methods, high-level programming theory, and software testing methods.
- Research Article
- 10.1080/17442508.2025.2572632
- Oct 14, 2025
- Stochastics
- V Čekanavičius + 1 more
Compound Poisson approximation is applied to sums of 1-dependent, identically distributed random variables with random Bernoulli weights under a weakened moment assumption. It is proved that if the absolute moment of order 1 + δ, 0 < δ ≤ 1, exists, and the random variable has a density or is a lattice variable, then the accuracy of approximation is frequently of order O(n^{−δ}). It is also proved that accuracy of order O(n^{−1}) is possible for convolutions of a scaled Poisson distribution with the normal and Skellam distributions, if finite fourth moments exist. The results are related to the aggregate claims distribution in Actuarial Mathematics and the first uniform Kolmogorov theorem in Probability Theory.
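For context (a standard definition, not specific to this paper): a compound Poisson law is the distribution of a Poisson-random sum of i.i.d. terms, with characteristic function

```latex
% S = \sum_{i=1}^{N} X_i, with N \sim \mathrm{Poisson}(\lambda) independent of the i.i.d. X_i \sim F
\mathbb{E}\, e^{\mathrm{i} s S} \;=\; \exp\!\bigl(\lambda\,(\widehat{F}(s) - 1)\bigr),
\qquad \widehat{F}(s) = \int e^{\mathrm{i} s x}\, F(\mathrm{d}x).
```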
- Research Article
- 10.22331/q-2025-10-13-1880
- Oct 13, 2025
- Quantum
- David Schmid + 4 more
It is commonly believed that failures of tomographic completeness undermine assessments of nonclassicality in noncontextuality experiments. In this work, we study how such failures can indeed lead to mistaken assessments of nonclassicality. We then show that proofs of the failure of noncontextuality are robust to a very broad class of failures of tomographic completeness, including the kinds of failures that are likely to occur in real experiments. We do so by showing that such proofs actually rely on a much weaker assumption that we term relative tomographic completeness: namely, that one's experimental procedures are tomographic for each other. Thus, the failure of noncontextuality can be established even with coarse-grained, effective, emergent, or virtual degrees of freedom. This also implies that the existence of a deeper theory of nature (beyond that being probed in one's experiment) does not in and of itself pose any challenge to proofs of nonclassicality. To prove these results, we first introduce a number of useful new concepts within the framework of generalized probabilistic theories (GPTs). Most notably, we introduce the notion of a GPT subsystem, generalizing a range of preexisting notions of subsystems (including those arising from tensor products, direct sums, decoherence processes, virtual encodings, and more). We also introduce the notion of a shadow of a GPT fragment, which captures the information lost when one's states and effects are unknowingly not tomographic for one another.
- Research Article
- 10.1175/mwr-d-25-0039.1
- Oct 10, 2025
- Monthly Weather Review
- Cheng Zheng + 4 more
Abstract The Madden-Julian Oscillation (MJO) is recognized as a major source of predictability in subseasonal forecasts. Many studies investigate how the MJO modulates prediction skill, often referred to as the "forecast windows of opportunity" driven by the MJO, which can be useful for operational forecasts. In this study, we use observational data and the Community Earth System Model version 2 Large Ensemble (CESM2-LE) to explore the MJO influence on weeks 3–4 precipitation prediction skill over the contiguous United States (CONUS) with statistical prediction models. The prediction skill, represented by the Heidke skill score (HSS), shows substantial variations due to the MJO modulation for different 40-year periods, which can be well explained by probability theory. Based on the theoretical explanation, the uncertainty in the MJO modulation of the prediction skill, mostly due to the limited number of MJO events in different 40-year periods, exceeds the true MJO influence. With such a low signal-to-noise ratio, high prediction skill cannot be attributed solely to the MJO modulation but also involves constructive interference between the MJO and other climate variability. This interference is random across different time periods, so constructive interference tends to diminish in subsequent periods, leading to lower than expected skill in future real-time applications of the prediction tool over regions where high prediction skill is identified during the satellite-observed period. We emphasize the need for caution when interpreting the MJO modulation of prediction skill and recommend considering the uncertainty of the modulation highlighted in this study.
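For reference, the Heidke skill score used here has the standard form of accuracy measured relative to chance (the paper's categorical binning is not reproduced):

```latex
\mathrm{HSS} \;=\; \frac{PC - PC_{\mathrm{chance}}}{1 - PC_{\mathrm{chance}}},
```

where PC is the proportion of correct categorical forecasts and PC_chance is the proportion expected from random forecasts with the same marginal frequencies; HSS = 1 for perfect forecasts and 0 for no skill beyond chance.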