Articles published on Probabilistic description
952 Search results
- New
- Research Article
- 10.1371/journal.pcbi.1013887
- Jan 20, 2026
- PLoS computational biology
- Ryan Pellow + 1 more
Eukaryotes' genomes are organized within nuclei in three-dimensional space, forming structures such as loops, topologically associating domains (TADs), and chromosome territories. This 3D architecture impacts gene regulation and development, stress responses, and disease. However, current methods to infer these 3D structures from genomic data have multiple drawbacks, including varying outcomes depending on the resolution of the analysis and sequencing depth, qualitative outputs that limit statistical comparisons, and insufficient insight into structure frequency within samples. These challenges hinder rigorous comparisons of 3D properties across genomes, conditions, or species. To overcome these issues, we developed WaveTAD, a wavelet transform-based method that provides a resolution-free, probabilistic, and hierarchical description of 3D organization. WaveTAD generates TAD strengths, capturing variable frequency of intrachromosomal interactions within samples, and shows increased accuracy and sensitivity over existing methods. We applied WaveTAD to multiple datasets from Drosophila, mouse, and humans to illustrate new biological insights that our more sensitive and quantitative approach provides, such as the widespread presence of embryonic 3D organization before zygotic genome activation, the effect of multiple CTCF units on the stability of loops and TADs, and the association between gene expression and TAD structures in COVID-19 patients or sex-specific transcription in Drosophila.
- New
- Research Article
- 10.1063/5.0311209
- Jan 1, 2026
- Chaos (Woodbury, N.Y.)
- Adrián García-Gutiérrez + 3 more
The Largest Lyapunov Exponent (LLE) is a fundamental diagnostic of chaotic behavior in nonlinear dynamical systems, quantifying the exponential divergence of nearby trajectories. Classical computational approaches, such as Wolf's algorithm, track individual particle trajectories to estimate the LLE, but these techniques face challenges related to noise sensitivity, computational efficiency, and scalability to high-dimensional systems. This work introduces a novel variance-based methodology for computing the LLE using intrusive polynomial chaos (IPC), an uncertainty quantification technique that evolves the probability distribution of initial conditions under deterministic dynamics rather than tracking discrete trajectories. The key innovation is extracting the LLE from the exponential growth rate of ensemble variance, which connects deterministic chaos with probabilistic descriptions. Validation against the classical trajectory-based algorithm is performed on three benchmark chaotic systems: the three-dimensional Lorenz and Rössler attractors, and a six-dimensional system from Al-Azzawi and Al-Obeidi, demonstrating that the IPC approach achieves comparable accuracy and convergence rates while offering the distinct advantage of directly computing the full statistical structure of ensemble dynamics. Comparison of convergence histories, probability density functions of instantaneous Lyapunov exponents, and statistical error measures confirms excellent agreement between the proposed IPC-based methodology and established algorithms. The results indicate that variance-based LLE estimation via polynomial chaos is a robust and viable alternative to trajectory-based methods.
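As a concrete illustration of the variance-growth idea, the sketch below estimates the largest Lyapunov exponent of the Lorenz system from the log-variance of a small Monte Carlo ensemble of nearby initial conditions. This is a simplified stand-in for the paper's intrusive polynomial chaos formulation; the parameter values, ensemble size, and fitting window are illustrative assumptions, not taken from the article.

```python
# Monte Carlo sketch of the variance-based idea (illustrative stand-in for the
# intrusive polynomial-chaos formulation): evolve a tight cloud of initial
# conditions under the Lorenz system and read the largest Lyapunov exponent
# off the exponential growth rate of the total ensemble variance.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

rng = np.random.default_rng(0)
t_eval = np.linspace(0.0, 8.0, 800)
ensemble = np.array([1.0, 1.0, 1.0]) + 1e-9 * rng.standard_normal((200, 3))

states = np.array([
    solve_ivp(lorenz, (0.0, 8.0), ic, t_eval=t_eval, rtol=1e-9, atol=1e-12).y.T
    for ic in ensemble
])                                              # shape: (members, time, dim)

total_var = states.var(axis=0).sum(axis=1)      # ensemble variance summed over x, y, z
# Var(t) ~ Var(0) * exp(2 * LLE * t) while the cloud is still small, so the
# LLE is half the slope of log-variance versus time over that window.
window = (t_eval > 1.0) & (t_eval < 7.0)
slope = np.polyfit(t_eval[window], np.log(total_var[window]), 1)[0]
print(f"variance-based LLE estimate: {slope / 2:.2f}  (literature value ~ 0.9)")
```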
- New
- Research Article
- 10.7498/aps.75.20251426
- Jan 1, 2026
- Acta Physica Sinica
- Li Buwei + 7 more
Aims: High-resolution spectrographs are central to modern exoplanet research and are particularly effective for detecting Earth-like planets whose radial velocity (RV) signals can be only a few tens of centimeters per second. Achieving this level of precision requires highly accurate wavelength calibration. A key factor in this process is the modeling of the instrumental profile (IP), which describes the response of the spectrograph to incoming light. The true IP of a high-resolution instrument is often complex. It may show asymmetry or extended wings and change across the detector because of optical aberrations, variations in fiber illumination, and environmental effects. These features lead to systematic errors in the measured line centers when traditional parametric models such as Gaussian functions are used, and they limit the achievable RV precision.
Methods: This work introduces a non-parametric IP modeling method based on Gaussian Process Regression (GPR). The IP is treated as a smooth function with a flexible covariance structure instead of being constrained to a predefined analytic form. GPR learns both the global structure and small-scale features of the line shape directly from the data. Since the IP varies slowly across the detector, the method divides each spectral order into several consecutive spatial segments. Each segment is fitted independently, capturing local variations. The model includes measurement uncertainties and provides a probabilistic description of the IP. Adjacent segments are linked with smooth interpolation to ensure a continuous IP across the entire order. Model performance is evaluated using reduced chi-squared and root mean square error (RMSE), allowing quantitative assessment and comparison with traditional approaches.
Results: The method is tested with laser frequency comb (LFC) exposures from the fiber-fed High Resolution Spectrograph (HRS) on the 2.16 m telescope at Xinglong Observatory. The LFC produces a dense and highly stable set of emission lines and is well suited for validating IP reconstruction. Three experiments show clear and consistent improvements. Using odd-numbered lines to predict even-numbered ones within a single exposure reduces the RMSE by 35.6% compared with a Gaussian model, showing better determination of line centers. Applying an IP model trained on one exposure to a later exposure reduces the RMSE by 42.5%, demonstrating improved stability when the model is transferred between exposures. A comparison between two channels in the same exposure shows a 37.1% improvement in calibration consistency, indicating reduced channel-to-channel systematics.
Conclusions: The results show that GPR provides a more accurate description of the instrumental profile and its spatial variation than traditional parametric models. The improved reconstruction of the IP leads to more accurate line center measurements and a more stable and precise wavelength solution. This capability is important for pushing the RV precision of high-resolution spectrographs toward the centimeter-per-second level. GPR offers a promising approach for modeling instrumental profiles and supports the precision required for detecting Earth-like exoplanets.
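The toy sketch below conveys the basic idea of replacing a parametric line-profile model with Gaussian Process Regression: a noisy, slightly asymmetric synthetic profile is fitted with scikit-learn's GP regressor and compared against a plain Gaussian fit. The profile shape, noise level, and kernel choice are assumptions for illustration and do not reproduce the authors' HRS/LFC pipeline.

```python
# Illustrative sketch (not the authors' HRS/LFC pipeline): fit a non-parametric
# Gaussian-process model to a noisy, slightly asymmetric line profile and
# compare it with a symmetric Gaussian fit of the same samples.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 80)
true_ip = np.exp(-0.5 * x**2) * (1.0 + 0.15 * x)      # asymmetric "true" profile
y = true_ip + 0.02 * rng.standard_normal(x.size)      # noisy pixel samples

# Parametric baseline: symmetric Gaussian
def gauss(x, a, mu, s):
    return a * np.exp(-0.5 * ((x - mu) / s) ** 2)

popt, _ = curve_fit(gauss, x, y, p0=[1.0, 0.0, 1.0])

# Non-parametric GPR: smooth kernel plus a white-noise term for the errors
gpr = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3), normalize_y=True)
gpr.fit(x[:, None], y)
y_gp = gpr.predict(x[:, None])

def rmse(fit):
    return np.sqrt(np.mean((fit - true_ip) ** 2))

print(f"Gaussian fit RMSE: {rmse(gauss(x, *popt)):.4f}")
print(f"GPR fit RMSE:      {rmse(y_gp):.4f}")
```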
- Research Article
- 10.9734/ajpas/2025/v27i12839
- Dec 22, 2025
- Asian Journal of Probability and Statistics
- Thomas Adidaumbe Ugbe
This study uses a simulated macroeconomic dataset with 100 observations to compare Ordinary Least Squares (OLS) regression with Bayesian regression for GDP modelling. Investment, consumption, and government spending are included in the model specification as key explanatory variables. OLS, based on the traditional frequentist paradigm, yields parameter estimates obtained solely from the observed sample under stringent distributional assumptions. In contrast, Bayesian regression produces posterior estimates by combining prior distributions with the likelihood, providing a probabilistic description of parameter uncertainty. The methodological significance of prior specification was highlighted by the unstable inferences obtained from an initial Bayesian estimation with weakly informative priors; posterior convergence and predictive alignment with the OLS findings improved substantially once more informative priors were adopted. While Bayesian regression provided wider credible intervals reflecting parameter uncertainty, OLS produced narrower prediction intervals. The results confirm that Bayesian regression is a rigorous and reliable alternative to OLS when backed by well-informed priors, especially in situations with sparse data or ambiguous model assumptions.
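A minimal sketch of the comparison, under an assumed conjugate Gaussian setup (known noise variance, Gaussian prior) rather than the paper's exact simulation design: OLS point estimates are placed next to the posterior mean and standard deviation of a Bayesian linear regression on simulated GDP-style data.

```python
# Conjugate-Gaussian sketch (assumed setup, not the paper's exact simulation):
# OLS estimates versus the posterior of a Bayesian linear regression with an
# informative Gaussian prior on simulated GDP data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n),                      # intercept
                     rng.normal(5, 1, n),             # investment
                     rng.normal(8, 2, n),             # consumption
                     rng.normal(3, 1, n)])            # government spending
beta_true = np.array([2.0, 1.5, 0.8, 1.2])
sigma = 2.0
y = X @ beta_true + rng.normal(0, sigma, n)           # simulated GDP

# Frequentist OLS
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Conjugate Bayesian regression: prior beta ~ N(m0, S0), known noise sigma
m0 = np.zeros(4)
S0_inv = np.linalg.inv(np.eye(4) * 10.0)              # informative but not dogmatic prior
Sn = np.linalg.inv(S0_inv + X.T @ X / sigma**2)       # posterior covariance
mn = Sn @ (S0_inv @ m0 + X.T @ y / sigma**2)          # posterior mean

for name, b_ols, b_post, sd in zip(
        ["const", "invest", "consum", "gov"], beta_ols, mn, np.sqrt(np.diag(Sn))):
    print(f"{name:7s} OLS={b_ols:6.3f}  posterior={b_post:6.3f} +/- {sd:.3f}")
```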
- Research Article
- 10.1007/s10955-025-03527-5
- Oct 22, 2025
- Journal of Statistical Physics
- Fausto Colantoni + 2 more
In this paper, we study reflecting Brownian motion with Poissonian resetting. After providing a probabilistic description of the phenomenon using jump diffusions and semigroups, we analyze the time-reversed process starting from the stationary measure. We prove that the time-reversed process is a Brownian motion with a negative drift and non-local boundary conditions at zero. Moreover, we study the time-reversed process between two consecutive resetting points and show that, within this time window, it behaves as the same reflecting Brownian motion with a negative drift, where both the jump sizes and the time spent at zero coincide with those of the process obtained under the stationary measure. We characterize the dynamics of both processes and their local times, and finally investigate elliptic problems on positive half-spaces, showing that the two processes leave the same traces at the boundary.
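A short simulation sketch of the process studied here, purely for intuition: reflecting Brownian motion on the positive half-line whose position is reset to the origin at the epochs of an independent Poisson clock. The discretization step, rate, and diffusion constant are arbitrary choices, and the empirical mean is compared with the exponential stationary law that a standard renewal argument suggests.

```python
# Simulation sketch for intuition only (not the paper's analysis): reflecting
# Brownian motion on [0, inf) reset to the origin at the jump times of an
# independent Poisson process of rate r.
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 1e-3, 500_000
D, r = 0.5, 1.0                            # diffusion constant and resetting rate
x = 0.0
trace = np.empty(n_steps)

for i in range(n_steps):
    if rng.random() < r * dt:              # Poissonian resetting event
        x = 0.0
    else:
        x = abs(x + np.sqrt(2.0 * D * dt) * rng.standard_normal())   # reflect at 0
    trace[i] = x

stationary = trace[n_steps // 10:]          # discard a burn-in period
# A renewal argument gives an exponential stationary density with rate
# sqrt(r / D), i.e. mean sqrt(D / r) ~ 0.707 for these parameters.
print(f"empirical stationary mean: {stationary.mean():.3f}")
print(f"fraction of time within 0.1 of the wall: {(stationary < 0.1).mean():.3f}")
```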
- Research Article
- 10.1037/pha0000787
- Oct 1, 2025
- Experimental and clinical psychopharmacology
- Derek D Reed + 3 more
The operant demand cigarette purchase task conventionally prompts participants to report imagined purchases in units of single cigarettes. Although this purchasing modality diverges from the units deployed in real purchasing scenarios (i.e., packs of 20), no research has examined how simulated purchasing of the more ecologically valid unit of cigarette packs maps onto single-cigarette purchasing metrics. A sample of 212 participants in this study reported hypothetical cigarette purchases across three iterations of the cigarette purchase task. Two of these collected responses in a binary format (would or would not purchase) at yoked unit prices, where the tasks were distinguished as purchases of single cigarettes or packs of cigarettes. This binary framework permitted a simplified probabilistic description of purchasing that maintained a consistent timeframe. Participants also completed the standard cigarette purchase task, reporting the quantity of single cigarettes they would purchase and consume at each unit price. Purchasing breakpoint, or the highest price at which participants reported purchasing cigarettes, was broadly consistent across these tasks, though only weakly so when comparing purchases made on the two binary tasks (r = .183). Tests of equivalence suggested meaningful differences between breakpoint values reported on the single-cigarette binary task and the per-pack binary task, t(211) = -5.85, p < .001, and between breakpoint values reported on the standard purchase task and the per-pack binary task, t(211) = 11.49, p < .001. Results suggest more research is needed to determine what environmental factors or imposed constraints practically influence reported cigarette valuation.
- Research Article
- 10.1016/j.ijfatigue.2025.108953
- Sep 1, 2025
- International Journal of Fatigue
- Joona Vaara + 7 more
Probabilistic description of the cyclic R-curve based on microstructural barriers
- Research Article
- 10.1007/s10237-025-02005-x
- Aug 27, 2025
- Biomechanics and modeling in mechanobiology
- Denisa Martonová + 3 more
Computational modeling has become an integral tool for understanding the interaction between structural organization and functional behavior in a wide range of biological tissues, including the human myocardium. Traditional constitutive models, and recent models generated by automated model discovery, are often based on the simplifying assumption of perfectly aligned fiber families. However, experimental evidence suggests that many fiber-reinforced tissues exhibit local dispersion, which can significantly influence their mechanical behavior. Here, we integrate the generalized structure tensor approach into automated material model discovery to represent fibers that are distributed with rotational symmetry around three mean orthogonal directions (fiber, sheet, and normal) by using probabilistic descriptions of the orientation. Using biaxial extension and triaxial shear data from human myocardium, we systematically vary the degree of directional dispersion and stress measurement noise to explore the robustness of the discovered models. Our findings reveal that up to a moderate dispersion in the fiber direction and arbitrary dispersion in the sheet and normal directions improve the goodness of fit and enable recovery of a previously proposed four-term model in terms of the isotropic second invariant, two dispersed anisotropic invariants, and one coupling invariant. Our approach demonstrates strong robustness and consistently identifies similar model terms, even in the presence of up to 7% random noise in the stress data. In summary, our study suggests that automated model discovery based on the powerful generalized structure tensors is robust to noise and captures microstructural uncertainty and heterogeneity in a physiologically meaningful way.
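For readers unfamiliar with the generalized structure tensor, the sketch below evaluates the standard Gasser-Ogden-Holzapfel form H = κI + (1 - 3κ) a⊗a and the dispersed pseudo-invariant I4* = H : C for a uniaxial stretch; the dispersion values and deformation are illustrative and not taken from the paper.

```python
# Sketch of the generalized structure tensor used to encode fiber dispersion
# (standard Gasser-Ogden-Holzapfel form; parameter values are illustrative):
# H = kappa * I + (1 - 3 * kappa) * outer(a, a), and I4* = H : C.
import numpy as np

def generalized_structure_tensor(a, kappa):
    """kappa = 0 -> perfectly aligned fibers, kappa = 1/3 -> isotropic dispersion."""
    a = np.asarray(a, float) / np.linalg.norm(a)
    return kappa * np.eye(3) + (1.0 - 3.0 * kappa) * np.outer(a, a)

# Simple isochoric uniaxial stretch along the mean fiber direction
lam = 1.2
F = np.diag([lam, 1.0 / np.sqrt(lam), 1.0 / np.sqrt(lam)])   # deformation gradient
C = F.T @ F                                                   # right Cauchy-Green tensor

fiber = [1.0, 0.0, 0.0]
for kappa in (0.0, 0.1, 1.0 / 3.0):
    H = generalized_structure_tensor(fiber, kappa)
    I4_star = np.tensordot(H, C)            # double contraction H : C; equals I4 at kappa = 0
    print(f"kappa = {kappa:.3f}  I4* = {I4_star:.4f}")
```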
- Research Article
- 10.18372/2073-4751.82.20366
- Aug 23, 2025
- Problems of Informatization and Management
- O.Yu Lavrynenko
This paper addresses the problem of increasing the probability of recognizing commands and connected speech in radio engineering and telecommunications devices under the influence of distorting factors by developing new recognition models. Hidden Markov processes are used to give a probabilistic description of one-, three-, and four-phoneme models of speech signal recognition, which makes it possible to estimate the recognition probability of each model theoretically. On the basis of a comparative analysis, a four-phoneme recognition model was investigated; by adding one more state to the three-phoneme model, it increases the recognition probability relative to the other models. The probability of recognizing speech signals and commands with the four-phoneme method is established, and it is shown that applying it in practice with the developed software achieves a recognition probability of 98%. The influence of amplitude and phase distortion of the speech signal on the recognition probability was also studied: the recognition probability decreases to 81.7% when amplitude noise is introduced and to 92.3% when phase noise is introduced. A comparative analysis of the four- and three-phoneme models shows that the recognition error of the four-phoneme model is 40% lower than that of the three-phoneme model.
- Research Article
- 10.1177/10812865251348030
- Aug 20, 2025
- Mathematics and Mechanics of Solids
- Sharana Kumar Shivanand + 2 more
We present a novel framework for the probabilistic modelling of random fourth-order material tensor fields, with a focus on tensors that are physically symmetric and positive definite (SPD), of which the elasticity tensor is a prime example. Given the critical role that spatial symmetries and invariances play in determining material behaviour, it is essential to incorporate these aspects into the probabilistic description and modelling of material properties. In particular, we focus on spatial point symmetries or invariances under rotations, a classical subject in elasticity. Following this, we formulate a stochastic modelling framework using a Lie algebra representation via a memory-less transformation that respects the requirements of positive definiteness and invariance. With this, it is shown how to generate a random ensemble of elasticity tensors that allows independent control of strength, eigenstrain, and orientation. The procedure also accommodates the requirement to prescribe specific spatial symmetries and invariances for each member of the whole ensemble, while ensuring that the mean or expected value of the ensemble conforms to a potentially ‘higher’ class of spatial invariance. Furthermore, it is important to highlight that the set of SPD tensors forms a differentiable manifold, which geometrically corresponds to an open cone within the ambient space of symmetric tensors. Thus, we explore the mathematical structure of the underlying sample space of such tensors and introduce a new distance measure or metric, called the ‘elasticity metric’, between the tensors. Finally, we model and visualize a one-dimensional spatial field of orthotropic Kelvin matrices using interpolation based on the elasticity metric.
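The sketch below illustrates the core of the Lie-algebra idea in a heavily simplified form: symmetric matrices are sampled in the log domain and pushed through the matrix exponential, which guarantees symmetric positive definite samples by construction. The 6×6 toy Kelvin matrix, the perturbation scale, and the mean are assumptions for illustration; the paper's invariance controls and elasticity metric are not reproduced.

```python
# Simplified illustration of the log-domain (Lie-algebra) sampling idea:
# exp(symmetric matrix) is always symmetric positive definite, so sampling
# in the log domain yields valid stiffness-like matrices by construction.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

def sample_spd(log_mean, scale=0.1, n=5):
    """Sample SPD matrices fluctuating around expm(log_mean) in the log domain."""
    out = []
    for _ in range(n):
        A = rng.standard_normal(log_mean.shape)
        perturbation = scale * (A + A.T) / 2.0          # random symmetric element
        out.append(expm(log_mean + perturbation))       # exp(symmetric) -> SPD
    return out

# Toy 6x6 Kelvin-Mandel matrix (stands in for a fourth-order elasticity tensor)
log_mean = np.diag(np.log([10.0, 10.0, 10.0, 4.0, 4.0, 4.0]))
for S in sample_spd(log_mean):
    print(f"min eigenvalue: {np.linalg.eigvalsh(S).min():.3f}")   # always > 0
```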
- Research Article
- 10.1111/ffe.70053
- Aug 5, 2025
- Fatigue & Fracture of Engineering Materials & Structures
- Xingyuan Xu + 2 more
A digital twin (DT) framework based on a dynamic Bayesian network (DBN) has established a novel paradigm for damage prognosis. This study focuses on multidimensional uncertainty characterization in the damage prognosis of individual multiple-unit trains (IMUTs). A structural feature perception model is developed, integrating a probabilistic load equivalence quantification model based on a generalized durability load spectrum for IMUTs, a normalized fatigue crack growth rate model with uncertainty propagation, and a probabilistic description model for equivalent initial flaw size. A hybrid uncertainty quantification method is employed to achieve synergistic modeling of deterministic and stochastic parameters. A DT experimental platform centered on bogie welded structures is established, implementing a closed-loop validation mechanism between physical tests and virtual models. Experimental results demonstrate tracking errors ≤ 6.8% for the damage parameters and a 90.5% improvement in prediction accuracy compared to conventional methods.
- Research Article
- 10.1103/6375-8ncz
- Jul 31, 2025
- Physical Review Research
- Federico Gerbino + 3 more
We introduce a solvable model of a measurement-induced phase transition (MIPT) in a deterministic but chaotic dynamical system with a positive Lyapunov exponent. In this setup, an observer only has a probabilistic description of the system but mitigates chaos-induced uncertainty through repeated measurements. Using a minimal representation via a branching tree, we map this problem to the directed polymer (DP) model on the Cayley tree, although in a regime dominated by rare events. By studying the Shannon entropy of the probability distribution estimated by the observer, we demonstrate a phase transition distinguishing a chaotic phase with a reduced Lyapunov exponent from a strong-measurement phase where uncertainty remains bounded. Remarkably, the location of the MIPT transition coincides with the freezing transition of the DP, although the critical properties differ. We provide an exact, universal scaling function describing the entropy growth in the critical regime. Numerical simulations confirm our theoretical predictions, highlighting a simple yet powerful framework to explore measurement-induced transitions in classical chaotic systems.
- Research Article
- 10.36001/ijphm.2025.v16i2.4262
- Jul 26, 2025
- International Journal of Prognostics and Health Management
- Dersin Pierre + 3 more
Deep learning has demonstrated significant potential for prognostics in complex systems (Fink et al., 2020). Recent advances in physics-informed machine learning have integrated physics-of-failure principles within data-driven models (Arias Chao, Kulkarni, Goebel, & Fink, 2022). Beyond physical laws, fleet-level time-to-failure (TTF) distributions provide valuable prior knowledge for individual asset life predictions. In this paper we derive a probabilistic analytical health index (HI) model based on power-law degradation, enabling a probabilistic description that reconciles individual variability with fleet-wide trends. We show that, under Weibull, Gamma, and Pareto-distributed TTFs, the HI evolution follows an analytical form, allowing explicit characterization of the time to reach intermediate degradation levels. Therefore, this work provides a theoretical foundation for integrating reliability principles with deep learning, advancing towards Reliability-Informed Deep Learning (RIDL). The approach is validated on synthetic turbofan engine data and real-world battery degradation datasets. This work establishes a rigorous basis for embedding reliability engineering principles into deep learning, improving predictive maintenance and remaining useful life (RUL) estimation.
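A hedged numerical sketch of the power-law/Weibull case: assuming a health index of the form HI(t) = (t/TTF)^m (the functional form, exponent, and Weibull parameters here are illustrative assumptions, not the paper's), the time to reach an intermediate degradation level is a rescaled Weibull, which Monte Carlo sampling confirms against the closed-form mean.

```python
# Hedged sketch: with an assumed power-law health index HI(t) = (t / TTF)**m and
# Weibull-distributed TTF, the time to reach level h is TTF * h**(1/m), i.e. a
# rescaled Weibull. Parameters below are illustrative only.
import numpy as np
from math import gamma as gamma_fn

rng = np.random.default_rng(4)
k, lam, m = 2.0, 1000.0, 1.5              # Weibull shape/scale (hours), power-law exponent
h = 0.5                                   # intermediate health-index level

ttf = lam * rng.weibull(k, 100_000)       # fleet-level times to failure
t_h = ttf * h ** (1.0 / m)                # individual times at which HI reaches h

# t_h is again Weibull, with the same shape k and scale lam * h**(1/m)
print(f"Monte Carlo mean time to HI={h}:  {t_h.mean():.1f} h")
print(f"Analytical mean (scaled Weibull): {lam * h ** (1.0 / m) * gamma_fn(1 + 1 / k):.1f} h")
```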
- Research Article
- 10.3390/w17142145
- Jul 18, 2025
- Water
- Tangsong Luo + 4 more
Reservoir group flood control scheduling decisions face multiple uncertainties, such as dynamic fluctuations of evaluation indicators and conflicts in weight assignment. This study proposes a risk analysis model for the decision-making process. The model captures the temporal uncertainties of flood control indicators (such as the maximum reservoir water level and the flow at the downstream control section) with a Long Short-Term Memory (LSTM) network, constructs a feasible weight space covering four scenarios (unique fixed value, uniform distribution, etc.), uses game theory to resolve conflicts among the weights produced by four methods (Analytic Hierarchy Process (AHP), Entropy Weight, Criteria Importance Through Intercriteria Correlation (CRITIC), and Principal Component Analysis (PCA)), defines decision-making risk as the probability that the actual safety level fails to reach the evaluation threshold, and quantifies that risk with the First-Order Second-Moment (FOSM) method. Case verification on the cascade reservoirs of the Qiantang River Basin in China shows that the model provides a risk assessment framework integrating multi-source uncertainties for flood control scheduling decisions, through probabilistic description of indicator uncertainties (e.g., Zmax1 with μ = 65.3 and σ = 8.5) and definition of feasible weight regions (99% of the weight distribution covered by the 3σ criterion), filling a methodological gap in risk quantification during the decision-making process.
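The sketch below reproduces only the FOSM-style exceedance step, using the indicator statistics quoted in the abstract; the safety threshold is a hypothetical value chosen for illustration and is not from the paper.

```python
# FOSM-style sketch using the quoted statistics for Zmax1 (mu = 65.3, sigma = 8.5);
# the threshold below is a hypothetical illustration, not the paper's value.
from scipy.stats import norm

mu, sigma = 65.3, 8.5        # probabilistic description of the indicator Zmax1
threshold = 78.0             # hypothetical safety threshold (same units as Zmax1)

beta = (threshold - mu) / sigma                 # reliability index
risk = 1.0 - norm.cdf(beta)                     # P(Zmax1 exceeds the threshold)
print(f"reliability index beta = {beta:.2f}, exceedance risk = {risk:.3f}")
```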
- Research Article
- 10.1103/wjcn-l4ms
- Jul 17, 2025
- Physical review. E
- Camilla Sarra + 6 more
New experimental methods make it possible to measure the expression levels of many genes, simultaneously, in snapshots from thousands or even millions of individual cells. Current approaches to analyze these experiments involve clustering or low-dimensional projections, and often start with the assumption that distinct cell types exist. Here we use the principle of maximum entropy to obtain a probabilistic description that captures the observed presence or absence of mRNAs from hundreds of genes in cells from the mammalian brain. We construct the Ising model compatible with experimental means and pairwise correlations, and validate it by showing that it gives good predictions for higher-order statistics. We find that the probability distribution of cell states has many local maxima. Grouping cells according to these maxima (or energy minima) gives a classification in good agreement with currently assigned cell types. We show that when assignments disagree our model is dividing cell types into subtypes with clearly distinguishable expression patterns. These results make concrete the intuition that types or classes of cells are emergent behaviors.
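To make the "local maxima as cell types" idea concrete, the toy sketch below enumerates all on/off states of a handful of genes under an Ising (0/1 lattice-gas) model with made-up fields and couplings, and lists the states from which every single-gene flip lowers the probability. Nothing here is fit to the paper's data.

```python
# Toy sketch of the maximum-entropy (Ising, 0/1 lattice-gas) picture with
# made-up fields and couplings for a handful of genes: enumerate all on/off
# expression states and list the local probability maxima, which play the
# role of putative cell types.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n = 8                                           # number of genes, tiny so we can enumerate
h = rng.normal(0.0, 1.0, n)                     # fields: per-gene expression tendencies
J = rng.normal(0.0, 0.8, (n, n))
J = np.triu(J, 1) + np.triu(J, 1).T             # symmetric pairwise couplings, zero diagonal

states = np.array(list(itertools.product([0, 1], repeat=n)))
energy = -(states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states))
logp = -energy - np.logaddexp.reduce(-energy)   # normalized log-probabilities

def is_local_max(idx):
    """True if every single-gene flip lowers the probability of state idx."""
    for i in range(n):
        flipped = states[idx].copy()
        flipped[i] ^= 1
        if logp[int("".join(map(str, flipped)), 2)] > logp[idx]:
            return False
    return True

maxima = sorted((i for i in range(len(states)) if is_local_max(i)),
                key=lambda i: -logp[i])
print(f"{len(maxima)} local maxima (candidate 'cell types'); most probable:")
for i in maxima[:5]:
    print(states[i], f"log p = {logp[i]:.2f}")
```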
- Research Article
- 10.47363/jpsos/2025(7)315
- Jun 30, 2025
- Journal of Physics & Optics Sciences
- Romanenko Vladimir Alekseevich
This work presents a derivation of the wave function and the Schrödinger equation based on the dual tangential equation of time with an imaginary rate. The derivation uses de Broglie's formulas. The wave function and its complex conjugate, which together enter the probabilistic description of the position of a particle in a potential field, are considered, and another interpretation of their product is presented. Based on it, the classical structure of the electron is derived, and the quantum value of the imaginary incident vector is determined. When deriving the wave functions, simple solutions immediately arise for the energies corresponding to radiation emitted in discrete portions and to a microscopic harmonic oscillator. In quantum mechanics, the first solution was used by Planck as a hypothesis in deriving his famous formula describing equilibrium thermal radiation. The second energy solution was originally obtained from the Schrödinger equation by cumbersome calculations.
- Research Article
- 10.1088/1361-6552/adda86
- Jun 3, 2025
- Physics Education
- Suyu Li + 2 more
Quantum mechanics is a physical theory based on statistical laws: it can only predict probabilities for measurement outcomes, suggesting that it is an average description. Because not all details of microscopic physical objects are available, only probabilistic predictions can be made for physical phenomena. In this paper, we discuss the root of the probabilistic descriptions in quantum theory by recalling single-photon interference experiments in two commonly used optical interferometric systems: the Mach–Zehnder interferometer (MZI) and Young's double-slit system (YDS). Although both are two-path interference schemes, we show that quantum mechanics provides deterministic predictions for the measurement outcomes in some special cases with the MZI, while only probabilistic predictions can be made for the measurement outcomes with the YDS. We hope this work helps to clarify the probabilistic nature of quantum mechanics.
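The deterministic MZI case can be checked with textbook two-mode amplitudes; the sketch below (standard beam-splitter matrices, not the paper's derivation) shows that the output port is certain at zero relative phase and only probabilistic at intermediate phases.

```python
# Textbook single-photon Mach-Zehnder amplitudes (standard beam-splitter
# matrices, not the paper's derivation): at zero relative phase the output
# port is certain; at intermediate phases only probabilities can be given.
import numpy as np

BS = np.array([[1.0, 1.0j], [1.0j, 1.0]]) / np.sqrt(2.0)   # balanced beam splitter

def output_probs(phi):
    phase = np.diag([np.exp(1.0j * phi), 1.0])       # phase shift in one arm
    amp = BS @ phase @ BS @ np.array([1.0, 0.0])     # photon enters input port 0
    return np.abs(amp) ** 2                          # detection probabilities

for phi in (0.0, np.pi / 2, np.pi):
    p1, p2 = output_probs(phi)
    print(f"phase {phi:4.2f}:  P(detector 1) = {p1:.2f},  P(detector 2) = {p2:.2f}")
```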
- Research Article
- Apr 30, 2025
- ArXiv
- Tuhin Chakrabortty + 1 more
Controlling stochastic temporal networks remains an open challenge in control theory. While predictable temporal networks with known future dynamics enhance controllability, real-world networks often exhibit stochasticity and unpredictability, making control harder. Here, we investigate control mechanisms for stochastic temporal networks by analyzing how biological controllers, such as shepherd dogs, manage panicked flocks of sheep. We studied a century-old shepherding competition, the sheepdog trials, where small groups of sheep unpredictably switch between fleeing and following behaviors, effectively forming stochastic temporal networks. Unlike large, cohesive flocks, these small, indecisive flocks are difficult to control, yet skilled dog-handler teams excel at both herding and splitting them (shedding) on demand. Using a stochastic choice model to describe the sheep's behavioral shifts, we found that trained dogs exploit stochastic indecisiveness, typically seen as an obstacle, as a control tool, enabling both herding and splitting of noisy groups of sheep. Building on these insights, we developed the Indecisive Swarm Algorithm (ISA) for artificial agents and benchmarked its performance against standard approaches, including the Averaging-Based Swarm Algorithm (ASA) and the Leader-Follower Swarm Algorithm (LFSA). ISA minimizes control energy in trajectory-following tasks and outperforms alternatives under noisy conditions. By framing these results within a stochastic temporal network perspective, we demonstrate that even probabilistic knowledge of future dynamics can enhance control efficiency in specific scenarios. These findings establish a framework for managing stochastic temporal networks with applications in noisy, behavior-switching animal collectives, swarm robotics, and opinion dynamics.
- Research Article
- 10.1080/00207179.2025.2492305
- Apr 24, 2025
- International Journal of Control
- Filipe Marques Barbosa + 1 more
Stochastic model predictive control addresses uncertainties by incorporating the probabilistic description of the disturbances into joint chance constraints. Yet, the classic methods for handling this class of constraints are often computationally inefficient and overly conservative. To overcome this, we propose to replace the nonconvex inverse cumulative distribution function of the standard normal distribution in the deterministic counterpart of these constraints with a highly accurate, exponential cone-representable approximation. This allows the constraints to be formulated as exponential cone functions, and the problem is solved as an exponential cone optimization with risk allocation as decision variables. The main advantage of the proposed approach is that the optimization problem is efficiently solved with off-the-shelf software, and with reduced conservativeness. Moreover, it applies to any problem with linear joint chance constraints subject to normally distributed disturbances. We validate our method with numerical examples of stochastic model predictive control applications.
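For context, the sketch below evaluates the standard deterministic counterpart of a single Gaussian chance constraint, which contains the inverse cumulative distribution term the paper approximates; the exponential-cone reformulation itself is not reproduced here, and the numbers are illustrative.

```python
# Standard deterministic counterpart of a single Gaussian chance constraint:
# Pr(a.T x + w <= b) >= 1 - eps with w ~ N(0, sigma^2) tightens to
# a.T x <= b - sigma * Phi^{-1}(1 - eps). Numbers below are illustrative.
import numpy as np
from scipy.stats import norm

a, b, sigma, eps = np.array([1.0, 2.0]), 10.0, 0.5, 0.05
x = np.array([2.0, 3.0])                     # a candidate decision

backoff = sigma * norm.ppf(1.0 - eps)        # the inverse-CDF term the paper approximates
lhs = a @ x
print(f"tightened constraint: {lhs:.2f} <= {b - backoff:.2f}",
      "(satisfied)" if lhs <= b - backoff else "(violated)")
```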
- Research Article
- 10.1007/s43069-025-00452-x
- Apr 17, 2025
- Operations Research Forum
- Petr Volf
This paper deals with a simple probabilistic description of some issues related to searching for a missing person. The questions addressed are only a small part of the complex area of search and rescue problems, but they concern at least two interesting applications of mathematics to managerial decision-making. First, we propose the use of Bayes' rule to recalculate the probability of a lost person's location based on additional information. Next, the optimal ordering of searched areas is determined so as to minimize the average (expected) search time. Finally, the proposed solutions are illustrated with several artificial examples.
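A minimal sketch of the two ingredients, with made-up numbers: a Bayes' rule update of the area probabilities after an unsuccessful search, and the classical rule of ordering areas by detection payoff per unit search time, which an exchange argument shows minimizes the expected time to find the person. The paper's own formulation may differ in detail.

```python
# Sketch of the two ingredients (illustrative numbers only): (1) Bayes' rule to
# update location probabilities after an unsuccessful search of one area, and
# (2) the classical ordering of areas by p_i * q_i / t_i to minimize the
# expected time to find the person.
import numpy as np

p = np.array([0.4, 0.3, 0.2, 0.1])   # prior probability the person is in each area
q = np.array([0.7, 0.9, 0.5, 0.8])   # detection probability if searched and present
t = np.array([2.0, 1.0, 3.0, 1.5])   # hours needed to search each area

# (1) Area 0 searched without success: posterior via Bayes' rule
post = p * np.where(np.arange(len(p)) == 0, 1.0 - q, 1.0)
post /= post.sum()
print("posterior after failing to find the person in area 0:", post.round(3))

# (2) Optimal ordering: decreasing ratio of detection 'payoff rate'
order = np.argsort(-p * q / t)
print("search areas in this order:", order)
```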