Articles published on Probability measure
12338 Search results
- Research Article
- 10.1007/s11222-026-10855-3
- Mar 9, 2026
- Statistics and Computing
- Sahani Pathiraja + 2 more
Sequential filtering and spatial inverse problems assimilate data points distributed either temporally (in the case of filtering) or spatially (in the case of spatial inverse problems). Sometimes it is possible to choose the position of these data points (which we call sensors here) in advance, with the goal of maximising the expected information gain (or a different metric of performance) from future data, and this leads to an Optimal Experimental Design (OED) problem. Here we revisit an interpretation of optimising sensor placement as an integration with respect to a general probability measure $\xi$. This generalises the problem of discrete-time sensor placement (which corresponds to the special case where the probability measure is a mixture of Diracs) to an infinite-dimensional, but mathematically better-behaved setting. We focus on the continuous-time stochastic filtering setting, whose solution is governed by the Zakai equation. We derive an expression for the Fréchet derivative of a general OED utility functional, the key to which is an adjoint (backwards in time) differential equation. This paves the way for utilising new gradient-based methods for solving the corresponding optimisation problem, as a potentially more efficient alternative to (semi-)discrete optimisation methods, e.g. based on greedy insertion and deletion of sensor placements.
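For orientation, this is the classical experimental-design picture in my own notation, not the paper's filtering-specific construction: in linear-model OED the design enters the criterion only through an integral against the design measure, and the discrete placement problem is exactly the Dirac-mixture special case mentioned above.

$$
M(\xi) = \int m(x)\,\mathrm{d}\xi(x), \qquad
U(\xi) = \log\det M(\xi), \qquad
\xi = \sum_{k=1}^{K} w_k\,\delta_{x_k},\;\; w_k \ge 0,\;\; \sum_{k} w_k = 1,
$$

so optimising over sensor locations $x_k$ and weights $w_k$ becomes optimisation over the probability measure $\xi$ itself, the viewpoint in which a Fréchet derivative with respect to $\xi$ can be formulated.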
- Research Article
- 10.1007/s10107-026-02339-z
- Mar 9, 2026
- Mathematical Programming
- Renjie Chen + 2 more
Finding an approximation of the inverse of the covariance matrix, also known as the precision matrix, of a random vector with empirical data is widely discussed in finance and engineering. In data-driven problems, empirical data may be “contaminated”. This raises the question as to whether the approximate precision matrix is reliable from a statistical point of view. In this paper, we concentrate on a widely studied sparse estimator of the precision matrix and investigate the issue from the perspective of distributional stability. Specifically, we derive an explicit local Lipschitz bound for the distance between the distributions of the sparse estimator under two different distributions (regarded as the true data distribution and the distribution of “contaminated” data). The distance is measured by the Kantorovich metric on the set of all probability measures on a matrix space. We also present analogous results for the standard estimators of the covariance matrix and its eigenvalues. Furthermore, we discuss several applications and conduct some numerical experiments.
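For readers unfamiliar with the metric named above, the Kantorovich (Wasserstein-1) distance between probability measures $P$ and $Q$ admits the dual form below; this is a standard definition quoted for context, not taken from the paper.

$$
\mathsf{d}_K(P,Q) \;=\; \sup_{\mathrm{Lip}(h)\le 1} \left|\int h\,\mathrm{d}P - \int h\,\mathrm{d}Q\right|,
$$

where the supremum runs over all 1-Lipschitz functions $h$ on the underlying (here, matrix) space.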
- Research Article
- 10.1007/s40590-026-00877-2
- Mar 9, 2026
- Boletín de la Sociedad Matemática Mexicana
- Óscar Vega-Amaya + 1 more
On compactness properties of subsets of probability measures on metric spaces
- Research Article
- 10.1007/s10687-025-00523-6
- Mar 3, 2026
- Extremes
- Nicolas Dietrich
Investigating the level sets $L^t$ of Archimax copulas $C \in \mathcal{C}_{am}$, we establish that these sets can be characterized in terms of certain convex functions $f^s$ and non-decreasing functions $g^t$. Motivated by the results in Mai and Scherer (Extremes 14, 311-324, 2011) and Trutschnig et al. (Extremes 19, 405-427, 2016), which examine the way bivariate Extreme Value copulas distribute their mass, we extend these findings to the larger family of bivariate Archimax copulas $\mathcal{C}_{am}$. Working with Markov kernels (conditional distributions), we analyze the mass distributions of Archimax copulas $C \in \mathcal{C}_{am}$ and show that the support of $C$ is determined by some functions $f^0$, $g^L$, and $g^R$. Additionally, we prove that the discrete component (if any) of $C$ concentrates its mass on the graphs of the aforementioned functions $f^s$ or $g^t$. Recognizing the close relationship between the level sets $L^t$ of a copula $C$ and its Kendall distribution function $F_C^K$, we provide an alternative proof for the representation of $F_C^K$ for arbitrary Archimax copulas $C \in \mathcal{C}_{am}$ and derive simple expressions for the level set masses $\mu_C(L^t)$. Building upon the fact that Archimax copulas $C \in \mathcal{C}_{am}$ can be represented via two univariate probability measures $\gamma$ and $\vartheta$ (the so-called Williamson and Pickands dependence measures), we show that absolute continuity, discreteness, and singularity properties of these measures $\gamma$ and $\vartheta$ carry over to the corresponding Archimax copula $C_{\gamma, \vartheta}$. Finally, we derive conditions on $\gamma$ and $\vartheta$ such that the support of the absolutely continuous, discrete, or singular component of $C_{\gamma, \vartheta}$ coincides with the support of $C_{\gamma, \vartheta}$.
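As background, and with the caveat that conventions for the argument of $A$ vary across papers, a bivariate Archimax copula is usually built from an Archimedean generator $\psi$ and a Pickands dependence function $A$; this is how the Williamson measure $\gamma$ (parametrising $\psi$) and the Pickands measure $\vartheta$ (parametrising $A$) enter.

$$
C_{\psi,A}(u,v) \;=\; \psi\!\Big(\big(\psi^{-1}(u)+\psi^{-1}(v)\big)\,
A\Big(\tfrac{\psi^{-1}(u)}{\psi^{-1}(u)+\psi^{-1}(v)}\Big)\Big),
\qquad u,v\in[0,1].
$$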
- Research Article
- 10.1016/j.apal.2026.103737
- Mar 1, 2026
- Annals of Pure and Applied Logic
- Simon M Huttegger + 2 more
Algorithmic randomness and the weak merging of computable probability measures
- Research Article
- 10.3390/math14050762
- Feb 25, 2026
- Mathematics
- David Carfí + 2 more
This paper develops a practical computational framework for the Bayesian Cournot model with bilateral incomplete cost information, where each player is uncertain about the opponent’s marginal cost, drawn from a continuous compact interval $[c_*, c^*]$ with $0 < c_* < c^* < \infty$. The infinite dimensionality of the functional strategy spaces (mappings from types to production quantities) renders analytical closed-form solutions infeasible in this continuous-type setting. To overcome this challenge, we restrict the strategy spaces to finite-dimensional differentiable sub-manifolds, specifically one-parameter families of oscillatory functions (cosine, sine, and mixed forms). After suitable affine $Q$-rescaling to map the oscillatory range into the production interval $[0, Q]$, and with parameter ranges satisfying $\alpha, \beta > (\pi/2)/c^*$, these curves ensure near-exhaustivity: the joint production map $(\alpha, \beta) \mapsto (x_\alpha(s), y_\beta(t))$ covers $[0, Q]^2$ densely for every fixed cost pair $(s, t)$, thereby recovering (up to density and closure) the full ex-post payoff space. We introduce the ex-post payoff mapping $\Phi(s, t, x, y) = (e_s(x, y)(t), f_t(x, y)(s))$, which collects every realizable payoff pair once nature draws the types and players select their strategies. The image of $\Phi$ defines the general payoff space of the game, and its non-dominated points constitute the general ex-post Pareto frontier: all efficient realized outcomes across type-strategy realizations, without dependence on private probability measures over types. Using multi-objective genetic algorithms, we numerically approximate this frontier (and selected collusive compromises) within the restricted but representative sub-manifolds. The resulting frontiers are computationally accessible, robust to parameter variations, and validated through hypervolume convergence, sensitivity analysis, and comparisons with NSGA-II, PSO, and scalarization methods. The findings are significant because they provide decision-makers in oligopolistic markets (e.g., electric vehicles) with viable, implementable production policies that explore efficient trade-offs under genuine cost uncertainty, without requiring explicit forecasts of the opponent’s type distribution (a limitation of traditional expected-utility approaches). By focusing on ex-post efficiency, the method reveals belief-independent compromise solutions that may guide tacit coordination or collusive outcomes in real-world strategic settings.
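A minimal numerical sketch of an affinely $Q$-rescaled one-parameter cosine strategy family: the specific functional form, constants, and parameter ranges below are illustrative assumptions of mine, since the abstract does not state them.

```python
import numpy as np

# Illustrative (assumed) cosine-type strategy: an affine rescaling of
# cos(alpha * s) so that outputs always lie in the production interval [0, Q].
Q = 10.0                      # production capacity (assumed value)
c_lo, c_hi = 1.0, 2.0         # cost interval endpoints (assumed values)

def x_strategy(alpha, s):
    """Map a cost type s in [c_lo, c_hi] to an output in [0, Q]."""
    return 0.5 * Q * (1.0 + np.cos(alpha * s))

# For a fixed cost pair (s, t), sweeping the strategy parameters makes each
# coordinate of (x_alpha(s), y_beta(t)) range over essentially all of [0, Q],
# which is the 'near-exhaustivity' idea described in the abstract.
s, t = 1.3, 1.7
alphas = np.linspace(2.0, 40.0, 5000)
xs, ys = x_strategy(alphas, s), x_strategy(alphas, t)
print(round(xs.min(), 3), round(xs.max(), 3))   # ~0.0 and ~10.0
print(round(ys.min(), 3), round(ys.max(), 3))   # ~0.0 and ~10.0
```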
- Research Article
- 10.28924/ada/stat.6.3
- Feb 24, 2026
- European Journal of Statistics
- Alivia F Zahro + 2 more
Stunting in toddlers is a significant public health problem in Indonesia due to its potential to inhibit child development and cause long-term adverse effects. Clustering the prevalence of stunting provides valuable insights for designing effective prevention policies. This study employs the Possibilistic Fuzzy C-Means (PFCM) method, validated using the Modified Partition Coefficient (MPC) index, to cluster stunting prevalence in Indonesia. The PFCM method integrates Fuzzy C-Means (FCM) and Possibilistic C-Means (PCM), balancing probabilistic (fuzzy) membership degrees with possibilistic typicality values. The primary advantages of this method are its capability to handle data with uncertain membership degrees, robustness against noise, and flexibility in defining probabilistic membership values. The results show that clusters with high stunting prevalence are dominated by nine provinces, namely Aceh, Jambi, Bengkulu, Bangka Islands, Central Kalimantan, Central Sulawesi, Gorontalo, West Papua, and Papua. The MPC validity score of 0.704 confirms the effectiveness of the PFCM method in categorizing stunting prevalence well, making it a robust tool to support policymaking in stunting prevention efforts.
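For context, the PFCM objective commonly used in the literature combines FCM-style memberships $u_{ik}$ (constrained to sum to one over clusters) with PCM-style typicalities $t_{ik}$; the exact variant and parameter settings used in this study are not stated in the abstract, so the following is only the generic form.

$$
J(U,T,V)\;=\;\sum_{i=1}^{c}\sum_{k=1}^{n}\big(a\,u_{ik}^{m}+b\,t_{ik}^{\eta}\big)\,\lVert x_k-v_i\rVert^{2}
\;+\;\sum_{i=1}^{c}\gamma_i\sum_{k=1}^{n}\big(1-t_{ik}\big)^{\eta},
\qquad \sum_{i=1}^{c}u_{ik}=1 .
$$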
- Research Article
- 10.1080/03605302.2026.2618765
- Feb 19, 2026
- Communications in Partial Differential Equations
- Hong-Bin Chen
Recently, it was demonstrated that, if it exists, the limit free energy of possibly non-convex spin glass models must be determined by a characteristic of the associated infinite-dimensional non-convex Hamilton-Jacobi equation. In this work, we investigate a similar theme purely from the perspective of PDEs. Specifically, we study the unique viscosity solution of the aforementioned equation and derive an envelope-type representation formula for the solution, in the form proposed by Evans. The value of the solution is expressed as an average of the values along characteristic lines, weighted by a non-explicit probability measure. The technical challenges arise not only from the infinite dimensionality but also from the fact that the equation is defined on a closed convex cone with an empty interior, rather than on the entire space. In the introduction, we provide a description of the motivation from spin glass theory and present the corresponding results for comparison with the PDE results.
- Research Article
- 10.1090/mcom/4186
- Feb 17, 2026
- Mathematics of Computation
- Yifan Chen + 4 more
Sampling a target probability distribution with an unknown normalization constant is a fundamental challenge in computational science and engineering. Recent work shows that algorithms derived by considering gradient flows in the space of probability measures open up new avenues for algorithm development. This paper makes three contributions to this approach to sampling, by scrutinizing the design components of such gradient flows. Any instantiation of a gradient flow for sampling needs an energy functional and a metric to determine the flow, as well as numerical approximations of the flow to derive algorithms. Our first contribution is to show that the Kullback-Leibler (KL) divergence, as an energy functional, has the unique property (among all $f$-divergences) that gradient flows resulting from it do not depend on the normalization constant of the target distribution; this justifies the widespread use of the KL divergence in sampling. Our second contribution is to study the choice of metric from the perspective of invariance. The Fisher-Rao metric is known as the unique choice (up to scaling) that is diffeomorphism invariant. As a computationally tractable alternative, we introduce a relaxed, affine invariance property for the metrics and gradient flows. In particular, we construct various affine invariant Wasserstein and Stein gradient flows. Affine invariant gradient flows are shown to behave more favorably than their non-affine-invariant counterparts when sampling highly anisotropic distributions, in theory and by using particle methods. Our third contribution is to study, and develop efficient algorithms based on, Gaussian approximations of the gradient flows; this leads to an alternative to particle methods. We establish connections between various Gaussian approximate gradient flows, discuss their relation to gradient methods arising from parametric variational inference, and study their convergence properties. Our theory and numerical experiments demonstrate the strengths and potential limitations of the Gaussian approximate Fisher-Rao gradient flow, which is affine invariant, by considering a wide range of target distributions.
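A quick illustration of the normalization-independence property in the Wasserstein geometry (a standard computation, included for context rather than quoted from the paper): with target $\pi \propto e^{-V}$ and unknown constant $Z$,

$$
\partial_t \rho \;=\; \nabla\!\cdot\!\Big(\rho\,\nabla \tfrac{\delta}{\delta\rho}\mathrm{KL}(\rho\,\|\,\pi)\Big)
\;=\; \nabla\!\cdot\!\big(\rho\,\nabla \log(\rho/\pi)\big)
\;=\; \nabla\!\cdot\!\big(\rho\,\nabla \log\rho + \rho\,\nabla V\big),
$$

and $Z$ only enters $\log(\rho/\pi)$ as an additive constant, which vanishes under the gradient.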
- Research Article
- 10.1111/jtsa.70045
- Feb 13, 2026
- Journal of Time Series Analysis
- Yiye Jiang + 1 more
This paper is focused on the statistical analysis of data consisting of a collection of multiple series of probability measures that are indexed by distinct time instants and supported over a bounded interval of the real line. By modeling these time-dependent probability measures as random objects in the Wasserstein space, we propose a new autoregressive model for the statistical analysis of multivariate distributional time series. Using the theory of iterated random function systems, results on the second-order stationarity of the solution of such a model are provided. We also propose a consistent estimator for the autoregressive coefficients of this model. Due to the simplex constraints that we impose on the model coefficients, the proposed estimator that is learned under these constraints naturally has a sparse structure. The sparsity allows the application of the proposed model in learning a graph of temporal dependency from multivariate distributional time series. We explore the numerical performance of our estimation procedure using simulated data. To shed some light on the benefits of our approach for real data analysis, we also apply this methodology to two data sets, made of observations of age distributions in different countries and of the bike sharing network in Paris, respectively.
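For context, on a bounded interval of the real line the Wasserstein-2 distance has a closed form through quantile functions (a standard fact, not specific to this paper), which is one reason distributional time series supported on an interval are tractable to model:

$$
W_2^2(\mu,\nu)\;=\;\int_0^1 \big(F_\mu^{-1}(u)-F_\nu^{-1}(u)\big)^2\,\mathrm{d}u ,
$$

where $F_\mu^{-1}$ and $F_\nu^{-1}$ are the quantile functions of $\mu$ and $\nu$.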
- Research Article
- 10.1080/10485252.2026.2620120
- Feb 10, 2026
- Journal of Nonparametric Statistics
- José A Perusquía + 2 more
The nonparametric view of Bayesian inference has transformed statistics and many of its applications. The canonical Dirichlet process and other more general families of nonparametric priors have served as a gateway to solve frontier uncertainty quantification problems of large, or infinite, nature. This success is largely due to the available constructions and representations of such distributions. Hence, understanding their distributional features and how different random probability measures compare among themselves is a key ingredient for their proper application. In this paper, we analyse the discrepancy among some relevant nonparametric priors. Initially, we compute the mean and variance of the random Kullback-Leibler divergence between the Dirichlet process and the geometric process. Subsequently, we extend our analysis to encompass a broader class of exchangeable stick-breaking processes, which includes the Dirichlet and geometric processes as extreme cases. Our results establish quantitative conditions under which all the aforementioned priors are close in total variation distance.
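For orientation (standard constructions, not results of the paper): both priors are stick-breaking random probability measures,

$$
P \;=\; \sum_{k\ge 1} w_k\,\delta_{\theta_k},
\qquad w_k \;=\; v_k\prod_{j<k}(1-v_j),
$$

where taking $v_j \overset{\text{iid}}{\sim} \mathrm{Beta}(1,\theta)$ gives the Dirichlet process, while setting every $v_j$ equal to a single random $\lambda$ (so $w_k=\lambda(1-\lambda)^{k-1}$) gives the geometric process; exchangeable stick-breaking processes sit between these two extremes.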
- Research Article
- 10.1080/07362994.2026.2616052
- Feb 7, 2026
- Stochastic Analysis and Applications
- Yixing Zhao + 5 more
Recent regulatory overhauls in North America and Europe require insurers to undertake a detailed revision of capital requirements across multiple insurance product domains. These cover unique challenges such as guaranteed minimum benefits in variable annuities and risk correlations. This article addresses the urgent need to establish robust pricing methodologies for option-embedded guarantees. Our focus is on the pricing of a guaranteed annuity option (GAO), which offers investors both growth prospects and downside protection. We propose a stochastic correlation framework to capture the dynamic dependence between financial and longevity risks. With the traditional Monte Carlo method used as a baseline, our change-of-probability-measures approach not only generates accurate GAO values but is also remarkably efficient computationally. An analysis of the magnitude and direction of the impact of the model parameters on GAO prices is also presented. Both the theoretical and applied contributions of this article are of central importance to insurers and regulators alike, and to the concerted efforts in sustaining the insurance sector’s stability and consumer protection.
- Research Article
- 10.3390/math14030564
- Feb 4, 2026
- Mathematics
- Priya Mittal + 2 more
This article presents a model for pricing an exchange option considering stochastic volatility and liquidity risk. The impact of liquidity risk on an asset price is considered by utilizing a liquidity discount process that is influenced by both market and asset-specific liquidity. Girsanov’s theorem is applied to transform from the real-world probability measure to equivalent probability measures, such as the risk-neutral probability measure. The Feynman–Kac theorem is applied to transform the exchange option pricing formula into the vanilla option pricing formula. The analytical expression is derived through the characteristic function approach. The accuracy of the proposed formula is validated through comparisons with Monte Carlo simulation, where the relative error remains below 0.93% across different values of S(0) and τ. Furthermore, numerical experiments highlight that incorporating liquidity risk leads to higher option prices. As the maturity increases from 0.1 to 2.0, the percentage gap between the option prices increases from 1.65% to 20.2%. Finally, sensitivity analysis is conducted to examine the influence of various parameters and to demonstrate the impact of stochastic volatility and liquidity in exchange option valuation.
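For context, the textbook change-of-numéraire identity behind such reductions (not necessarily the exact measure changes used in the paper) turns the exchange payoff into a vanilla call on the asset ratio with unit strike:

$$
\mathbb{E}^{\mathbb{Q}}\!\big[e^{-rT}(S_1(T)-S_2(T))^{+}\big]
\;=\; S_2(0)\,\mathbb{E}^{\mathbb{Q}^{S_2}}\!\Big[\big(S_1(T)/S_2(T)-1\big)^{+}\Big],
\qquad
\frac{\mathrm{d}\mathbb{Q}^{S_2}}{\mathrm{d}\mathbb{Q}}\;=\;\frac{e^{-rT}S_2(T)}{S_2(0)},
$$

valid when the discounted second asset $e^{-rt}S_2(t)$ is a $\mathbb{Q}$-martingale.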
- Research Article
- 10.1007/s11868-025-00759-7
- Feb 4, 2026
- Journal of Pseudo-Differential Operators and Applications
- Anselmo Torresblanca-Badillo + 2 more
We present a unified spectral framework for modeling discrete diffusion and Markovian dynamics on finite abelian groups via convolution semigroups, negative definite functions and pseudo-differential operators. Using discrete Fourier analysis on $\mathbb{Z}_N$, we construct families of probability measures whose evolution encodes diffusion processes governed by Feller semigroups. The associated pseudo-differential operators are shown to be $m$-dissipative and self-adjoint, ensuring stability and well-posedness. Explicit symbolic generators based on quadratic dispersion are introduced, revealing deep connections between harmonic structures and stochastic evolution. This approach not only advances the theory of discrete diffusion on algebraic groups but also opens avenues toward non-abelian and ultrametric generalizations, with applications in signal processing, graph-based modeling, and numerical analysis.
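A minimal numerical sketch of the kind of construction described above, under my own choice of symbol (the discrete-Laplacian symbol on $\mathbb{Z}_N$); the paper's specific quadratic-dispersion generators are not reproduced here.

```python
import numpy as np

# Evolve a probability vector on Z_N with a convolution semigroup that acts
# diagonally in the discrete Fourier basis: p_t = F^{-1}[ exp(-t * psi) F[p_0] ].
# Here psi is an illustrative (assumed) choice: the discrete-Laplacian symbol,
# a simple nonnegative "dispersion" symbol with psi(0) = 0.
N, t = 64, 0.5
k = np.arange(N)
psi = 4.0 * np.sin(np.pi * k / N) ** 2      # symbol of the negative Laplacian on the cycle Z_N

p0 = np.zeros(N)
p0[0] = 1.0                                 # initial unit mass at the identity

p_t = np.real(np.fft.ifft(np.exp(-t * psi) * np.fft.fft(p0)))

print(np.isclose(p_t.sum(), 1.0))           # mass conserved since psi(0) = 0
print((p_t >= -1e-12).all())                # positivity-preserving (Markov) semigroup
```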
- Research Article
- 10.1088/2399-6528/ae3ec9
- Feb 1, 2026
- Journal of Physics Communications
- Vitaliy Kapytin + 3 more
Detecting and localizing transient regime transitions in nonlinear and nonstationary time series remains a major challenge in monitoring and forecasting the behaviour of complex natural and engineered systems. In this work, we formalize and evaluate the D-index (Dynamic Energy Redistribution Index) as an energy-based dynamical diagnostic that quantifies the normalized temporal rate of change of an energy-like functional of a signal. The D-index is conceptually motivated by a quantity originally introduced within structured-systems mechanics to characterize irreversible internal energy production, but is used here strictly in an operational sense for empirical signal analysis. Rather than replacing probabilistic complexity measures derived from Shannon’s information theory, the D-index provides a complementary, energy-aware perspective by revealing localized activation and relaxation phases, burst-like episodes, and transient regime shifts that may remain weakly expressed in probability-based entropies. A computational framework with adaptive window selection is developed to ensure applicability across multiple temporal scales. Experiments on synthetic benchmarks and real observational data, including GOES X-ray flux and GNSS-derived vertical total electron content (VTEC), demonstrate that the D-index reliably identifies transitions between operating regimes, phase-change intervals, and ionospheric responses to solar terminator forcing while remaining robust to moderate and even severe noise contamination. These results confirm the practical value of the D-index as a diagnostic indicator for nonstationary signal analysis, with direct relevance to operational monitoring, early-warning detection, and predictive analytics in complex systems.
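A hedged sketch of a D-index-style computation as the abstract describes it (the normalized temporal rate of change of an energy-like functional); the window length, smoothing, and normalization below are my own assumptions, not the authors' exact definition.

```python
import numpy as np

def d_index(x, fs, window_s=1.0, eps=1e-12):
    """Normalized rate of change of a moving-window signal energy (illustrative)."""
    w = max(int(window_s * fs), 2)
    energy = np.convolve(x ** 2, np.ones(w) / w, mode="same")   # energy-like functional E(t)
    dE_dt = np.gradient(energy, 1.0 / fs)                       # temporal rate of change
    return np.abs(dE_dt) / (energy + eps)                       # normalization by E(t)

# Toy example: a short burst embedded in noise should produce a localized peak.
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(t.size)
x[400:450] += np.sin(2.0 * np.pi * 5.0 * t[400:450])            # transient regime
D = d_index(x, fs)
print(round(t[np.argmax(D)], 2))   # near the burst (around t = 4.0-4.5 s)
```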
- Research Article
- 10.1115/1.4070950
- Jan 22, 2026
- Journal of Offshore Mechanics and Arctic Engineering
- Mojtaba Mokhtari + 1 more
Through-wall pitting is the most common failure mode of concern for high-pressure pipelines as used in the oil and gas industry, both offshore and onshore, potentially allowing loss of containment and environmental pollution. Pipe burst under pressure is also of concern, particularly in high-safety-class pipelines. Semi-empirical models for predicting pipeline burst capacity vary in their fidelity. Mostly, this has been estimated by benchmarking burst capacity prediction models of undefined conservatism against burst pressure derived from finite element analysis (FEA) of steel pipes with wall defects such as those caused by corrosion. The most recent of these comparative assessments are reviewed herein, and areas of concern are noted. This is followed by a statistical analysis for performance assessment of several burst capacity models, by comparing their predictions against FEA-generated data, for which both the modelling and the input data appear to have been correctly applied. The results of the analyses allow a more accurate ranking of burst capacity models, considering both predicted mean values and estimates of variability, and provide a basis for making logically consistent comparisons. They also form a basis for application of safety factors to provide measures of the relative probability of pressure pipe bursts.
- Research Article
- 10.1142/s0219530526500235
- Jan 20, 2026
- Analysis and Applications
- Nathaël Da Costa + 3 more
Gaussian processes (GPs) are the most common formalism for defining probability distributions over spaces of functions. While applications of GPs are myriad, a comprehensive understanding of GP sample paths, i.e. the function spaces over which they define a probability measure, is lacking. In practice, GPs are not constructed through a probability measure, but instead through a mean function and a covariance kernel. In this paper we provide necessary and sufficient conditions on the covariance kernel for the sample paths of the corresponding GP to attain a given regularity. We focus primarily on Hölder regularity as it grants particularly straightforward conditions, which simplify further in the cases of stationary and isotropic GPs. We then demonstrate that our results allow for novel and unusually tight characterizations of the sample path regularities of the GPs commonly used in machine learning applications, such as the Matérn GPs.
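As context, the classical sufficient direction (via Gaussianity and the Kolmogorov-Chentsov theorem) already links kernel smoothness to Hölder sample paths; the paper's contribution is to sharpen this into conditions that are both necessary and sufficient. For a centred GP with covariance kernel $k$ and some $\alpha \in (0,1]$,

$$
k(x,x)-2k(x,y)+k(y,y)\;\le\;C\,\lVert x-y\rVert^{2\alpha}
\quad\Longrightarrow\quad
\text{(a modification of) the sample paths is a.s. locally Hölder of every order } \beta<\alpha .
$$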
- Research Article
- 10.1007/s00025-026-02594-8
- Jan 20, 2026
- Results in Mathematics
- Ines Adouani
Cubic Hermite Splines on the Hilbert manifold of Probability Measures
- Research Article
- 10.1177/09622802251414594
- Jan 20, 2026
- Statistical methods in medical research
- Asael Fabian Martínez + 2 more
The identification of latent profile trajectories in longitudinal studies represents an important challenge for specialists, since such trajectories could provide insights to better understand the problem of interest. The majority of statistical methodologies for cluster analysis of longitudinal data are based on growth curve or mixed-effects models, and often incorporate covariates for a better adjustment. In particular, among Bayesian nonparametric methods, Dirichlet process mixture models are widely used. We propose a clustering methodology for longitudinal data based on mixture models generated by a discrete random probability measure whose weights are decreasingly ordered by construction. Additionally, the data are modeled without making use of covariates and assuming independence across time for individual measurements. Our approach also provides a straightforward procedure for merging some of the estimated groups, since many may be produced, so that the results can be easily explained by experts. Our results suggest that, at least for a first analysis, this framework is enough to effectively detect groups in the data; further exploration of each group could incorporate extra information. We apply our methodology to detect adiposity trajectories in Mexican children in a secondary analysis of the "Prenatal Omega-3 fatty acid Supplementation and Child Growth and Development" study (POSGRAD) cohort.
- Research Article
- 10.1017/etds.2025.10267
- Jan 20, 2026
- Ergodic Theory and Dynamical Systems
- Nasab Yassine
In this paper, we study the quantitative recurrence properties in the case of $\mathbb{Z}$-extensions of Axiom A flows on a Riemannian manifold. We study the asymptotic behavior of the first return time to a small neighborhood of the starting point. We establish results of almost everywhere convergence, and of convergence in distribution with respect to any probability measure absolutely continuous with respect to the infinite invariant measure. In particular, our results apply to geodesic flows on the $\mathbb{Z}$-cover of compact smooth surfaces of negative curvature.