Articles published on the Bayesian paradigm
845 Search results
- New
- Research Article
- 10.1109/tcyb.2025.3632756
- Mar 1, 2026
- IEEE transactions on cybernetics
- Lizhang Wang + 2 more
This article addresses the integration of model-driven and data-driven approaches for robust hybrid-driven state estimation under limited data and model uncertainties. An unsupervised hybrid estimation framework, termed adaptive model-driven and data-driven (AMD), is proposed. AMD employs an adaptive cross-coupled prior mechanism within the Bayesian inference paradigm to integrate prior information. A two-stage fusion strategy is introduced: an initial hard fusion of model pseudomeasurements and data-driven priors, followed by an adaptive soft fusion that adjusts model influence based on reconstruction discrepancies, thereby enhancing robustness to imperfect model priors. To capture complex nonlinear transition dynamics, a dynamic bilinear recurrent module is developed, tailored to the system's underlying behavior. The AMD framework adopts a nonidentical training-testing strategy and an unsupervised hybrid learning objective inspired by the information bottleneck principle, enabling accurate parameter learning without access to ground-truth states. Extensive experiments on multiple nonlinear chaotic systems demonstrate that AMD consistently achieves competitive or superior estimation accuracy compared to state-of-the-art model-based and hybrid approaches, particularly under underdetermined estimation, model mismatch, and dynamic disturbances. These results confirm AMD's capability to effectively leverage limited information through complementary fusion, thereby enhancing both data representation and model robustness. This adaptability positions AMD as a powerful solution for challenging state estimation problems.
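A minimal sketch of the adaptive soft-fusion idea described above, assuming a scalar state and an exponential down-weighting of the model prior by its reconstruction discrepancy (the weighting rule and all names here are illustrative, not the authors' implementation):

```python
import numpy as np

def soft_fuse(x_model, x_data, discrepancy, tau=1.0):
    """Blend a model-driven estimate with a data-driven one.

    The model's influence decays as its reconstruction discrepancy
    grows, a toy stand-in for the paper's adaptive soft fusion.
    """
    w = np.exp(-discrepancy / tau)  # weight in (0, 1]; hypothetical rule
    return w * x_model + (1.0 - w) * x_data

# A poor model fit (large discrepancy) shifts trust toward the data prior.
print(soft_fuse(x_model=1.0, x_data=2.0, discrepancy=0.1))  # ~1.10, trusts model
print(soft_fuse(x_model=1.0, x_data=2.0, discrepancy=5.0))  # ~1.99, trusts data
```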
- New
- Research Article
- 10.1080/10589759.2026.2633577
- Feb 21, 2026
- Nondestructive Testing and Evaluation
- Yu Wang + 3 more
ABSTRACT Deep learning-based methods for Remaining Useful Life (RUL) prediction generally suffer from insufficient predictive dynamic stability and generalisation capability under real-world industrial conditions. To address this, this paper proposes the Degradation Manifold Dynamic Consistency Network (DMDCN). We initially define a differentiable embedded degradation manifold as the feature representation space. Addressing the limitation that existing embedding methods do not guarantee temporal evolution consistency, a dynamic consistency learning framework is devised to reframe RUL assessment as a joint inference problem of state estimation and dynamics estimation. Through the introduction of the manifold’s local geometric derivatives as the input domain for the dynamics estimator, dynamic consistency between the system’s state representation and its intrinsic evolutionary trend is achieved. Subsequently, this deterministic framework is extended to an approximate Bayesian paradigm, enabling uncertainty quantification via variational inference and Kernel Density Estimation. Empirical results on the public C-MAPSS dataset and a real-world industrial slurry pump dataset indicate that DMDCN improves prediction accuracy and dynamic stability compared to baseline models, validating the method’s potential for applications in high-reliability industrial scenarios.
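Assuming posterior-predictive RUL samples are already in hand (e.g., from repeated stochastic forward passes of a variationally trained network), the Kernel Density Estimation step mentioned above can be sketched as follows; the sample distribution here is synthetic and purely illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical posterior-predictive RUL samples (stand-in for draws from
# a variationally trained DMDCN-style predictor).
rng = np.random.default_rng(0)
rul_samples = rng.normal(loc=120.0, scale=8.0, size=500)

kde = gaussian_kde(rul_samples)             # smooth predictive density
grid = np.linspace(80.0, 160.0, 400)
density = kde(grid)

point_estimate = grid[np.argmax(density)]   # predictive mode
lo, hi = np.percentile(rul_samples, [2.5, 97.5])
print(f"RUL ~ {point_estimate:.1f} cycles, 95% band [{lo:.1f}, {hi:.1f}]")
```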
- Research Article
- 10.1080/10618600.2026.2627454
- Feb 10, 2026
- Journal of Computational and Graphical Statistics
- Juan Sosa + 1 more
Bayesian sociality models provide a scalable and flexible alternative for network analysis, capturing degree heterogeneity through actor-specific parameters while mitigating the identifiability challenges of latent space models. This paper develops a comprehensive Bayesian inference framework, leveraging Markov chain Monte Carlo and variational inference to assess their efficiency-accuracy trade-offs. Through empirical and simulation studies, we demonstrate the model’s robustness in goodness-of-fit, predictive performance, clustering, and other key network analysis tasks. The Bayesian paradigm further enhances uncertainty quantification and interpretability, positioning sociality models as a powerful and generalizable tool for modern network science.
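For orientation, one common formulation of a sociality model has each edge depend only on a baseline rate and the two actors' sociality parameters (the paper's exact parameterization and priors may differ):

```latex
\Pr(y_{ij} = 1 \mid \mu, \boldsymbol{\delta})
  = \operatorname{logit}^{-1}(\mu + \delta_i + \delta_j),
\qquad \delta_i \stackrel{\text{iid}}{\sim} \mathcal{N}(0, \sigma^2),
```

so that MCMC or variational inference targets the posterior of \(\mu\), the \(\delta_i\), and \(\sigma^2\) given the observed adjacency matrix.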
- Research Article
- 10.1002/qre.70164
- Jan 22, 2026
- Quality and Reliability Engineering International
- Hanan Haj Ahmad + 1 more
ABSTRACT This paper develops a comprehensive reliability framework for multicomponent stress–strength systems, where both strength and stress variables follow the exponentiated half‐logistic (EHL) distribution. Reliability analysis is performed under progressive first‐failure censoring, a flexible and practically relevant censoring scheme for modern life testing. A closed‐form expression is derived for the multicomponent reliability function R_{s,k}, representing the probability that at least s out of k strength units bear a common stress. Parameter estimation is addressed within both frequentist and Bayesian paradigms. Maximum likelihood estimates (MLEs) with asymptotic and bootstrap confidence intervals are derived, while Bayesian inference is carried out under the general entropy loss function (GELF) using the Lindley approximation and Markov chain Monte Carlo (MCMC) techniques, with the corresponding credible intervals (CrIs). The efficiency and robustness of the proposed procedures are examined through extensive Monte Carlo simulations under various censoring schemes. Two real datasets, concerning software reliability and carbon‐fiber strength, are analyzed to demonstrate the practical relevance of the model. The results establish the EHL distribution as a flexible and effective tool for modeling reliability in engineering and industrial systems, thereby extending methodological and applied insights in stochastic reliability analysis.
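For reference, the standard s-out-of-k multicomponent stress–strength reliability (in the Bhattacharyya–Johnson sense) is, with F the common strength CDF and G the stress CDF:

```latex
R_{s,k}
  = \Pr\bigl(\text{at least } s \text{ of } k \text{ strengths exceed the stress}\bigr)
  = \sum_{i=s}^{k} \binom{k}{i} \int_{0}^{\infty}
      \bigl[1 - F(x)\bigr]^{i}\,\bigl[F(x)\bigr]^{k-i}\, dG(x).
```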
- Research Article
- 10.3390/axioms15010056
- Jan 13, 2026
- Axioms
- Ahmed Elshahhat + 1 more
This paper introduces a novel extension of the classical Lindley distribution, termed the XLindley model, obtained by a specific mixture of exponential and Lindley distributions, thereby substantially enriching the distributional flexibility. To enhance its inferential scope, a comprehensive reliability analysis is developed under a generalized progressive hybrid censoring scheme, which unifies and extends several traditional censoring mechanisms and allows practitioners to accommodate stringent experimental and cost constraints commonly encountered in reliability and life-testing studies. Within this unified censoring framework, likelihood-based estimation procedures for the model parameters and key reliability characteristics are derived. Fisher information is obtained, enabling the establishment of asymptotic properties of the frequentist estimators, including consistency and normality. A Bayesian inferential paradigm using Markov chain Monte Carlo techniques is proposed by assigning a conjugate gamma prior to the model parameter under the squared error loss, yielding point estimates, highest posterior density credible intervals, and posterior reliability summaries with enhanced interpretability. Extensive Monte Carlo simulations, conducted under a broad range of censoring configurations and assessed using four precision-based performance criteria, demonstrate the stability and efficiency of the proposed estimators. The results reveal low bias, reduced mean squared error, and shorter interval lengths for the XLindley parameter estimates, while maintaining accurate coverage probabilities. The practical relevance of the proposed methodology is further illustrated through two real-life data applications from engineering and physical sciences, where the XLindley model provides a markedly improved fit and more realistic reliability assessment. By integrating an innovative lifetime model with a highly flexible censoring strategy and a dual frequentist–Bayesian inferential framework, this study offers a substantive contribution to modern survival theory.
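A minimal Metropolis–Hastings sketch of the inferential setup described: a gamma prior on the XLindley parameter θ, with the posterior mean as the Bayes estimate under squared error loss. The density used below, f(x; θ) = θ²(2 + θ + x)e^(−θx)/(1 + θ)², is the XLindley form commonly given in the literature; treat it, the complete (uncensored) sample, and all tuning constants as assumptions, since the paper works with generalized progressive hybrid censored data:

```python
import numpy as np

def xlindley_logpdf(x, theta):
    # XLindley density as commonly defined in the literature (assumption):
    # f(x; theta) = theta^2 (2 + theta + x) exp(-theta x) / (1 + theta)^2
    return (2 * np.log(theta) + np.log(2 + theta + x)
            - theta * x - 2 * np.log1p(theta))

def log_posterior(theta, data, a=2.0, b=1.0):
    if theta <= 0:
        return -np.inf
    log_prior = (a - 1) * np.log(theta) - b * theta  # Gamma(a, b) kernel
    return log_prior + xlindley_logpdf(data, theta).sum()

rng = np.random.default_rng(1)
data = rng.exponential(scale=1.0, size=50)  # placeholder complete sample

theta, chain = 1.0, []
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()  # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop, data) - log_posterior(theta, data):
        theta = prop
    chain.append(theta)

# Bayes estimate under squared error loss = posterior mean (after burn-in).
print("posterior mean of theta:", np.mean(chain[1000:]))
```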
- Research Article
- 10.3390/sym18010081
- Jan 3, 2026
- Symmetry
- Jianbo Huang + 4 more
Chronic kidney disease (CKD) impacts more than 850 million people globally, yet existing machine learning methodologies for risk stratification encounter substantial challenges: computationally intensive hyperparameter tuning, model opacity that conflicts with clinical interpretability standards, and class imbalance leading to systematic prediction bias. We constructed an integrated architecture that combines XGBoost with Optuna-driven Bayesian optimization, evaluated against 19 competing hyperparameter tuning approaches and tested on CKD patients using dual-paradigm statistical validation. The architecture delivered 93.43% accuracy, 93.13% F1-score, and 97.59% ROC-AUC—representing gains of 6.22 percentage points beyond conventional XGBoost and 7.0–26.8 percentage points compared to 20 baseline algorithms. Tree-structured Parzen Estimator optimization necessitated merely 50 trials compared to 540 for grid search and 1069 for FLAML, whereas Boruta feature selection accomplished 54.2% dimensionality reduction with no performance compromise. Over 30 independent replications, the model exhibited remarkable stability (cross-validation standard deviation: 0.0121, generalization gap: −1.13%) alongside convergent evidence between frequentist and Bayesian paradigms (all p < 0.001, mean CI-credible interval divergence < 0.001, effect sizes d = 0.665–5.433). Four separate explainability techniques (SHAP, LIME, accumulated local effects, Eli5) consistently identified CKD stage and albumin-creatinine ratio as principal predictors, aligning with KDIGO clinical guidelines. Clinical utility evaluation demonstrated 98.4% positive case detection at 50% screening threshold alongside near-optimal calibration (mean absolute error: 0.138), while structural equation modeling revealed hyperuricemia (β = −3.19, p < 0.01) as the most potent modifiable risk factor. This dual-validated architecture demonstrates that streamlined hyperparameter optimization combined with convergent multi-method interpretability enables precise CKD risk stratification with clinical guideline alignment, supporting evidence-informed screening protocols.
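A minimal sketch of the TPE-driven tuning loop in the spirit described, using Optuna's default Tree-structured Parzen Estimator sampler with the paper's 50-trial budget (the search space, data, and metric here are illustrative, not the study's exact configuration):

```python
import optuna
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the CKD feature matrix.
X, y = make_classification(n_samples=500, n_features=24, random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 600),
        "max_depth": trial.suggest_int("max_depth", 2, 8),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    model = xgb.XGBClassifier(**params, eval_metric="logloss")
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=50)             # 50-trial budget, as in the paper
print(study.best_params, study.best_value)
```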
- Research Article
- 10.23967/j.rimni.2025.10.69839
- Jan 1, 2026
- Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería
- M Nassar + 2 more
This work presents a novel and comprehensive inferential framework for analyzing the stress-strength reliability parameter R = P(Y > X), where X and Y denote independent stress and strength variables, respectively, both modeled as Weibull-distributed with a shared shape parameter but distinct scale parameters. A key innovation of this study lies in its integration of the unified Type-I progressively hybrid censoring scheme, which simultaneously accommodates time constraints and partial failure information, conditions often encountered in real-world reliability testing. To estimate R, we propose and evaluate four distinct inferential strategies: two frequentist (maximum likelihood estimation and maximum spacings estimation) and two Bayesian, each tailored to either the likelihood or spacings-based posterior formulation. The Bayesian methods employ Monte Carlo sampling to compute both Bayes point estimates and credible intervals under informative priors, offering robustness in small-sample or heavily censored contexts. An extensive simulation study is conducted to systematically compare the estimators in terms of bias, efficiency, and interval coverage. To validate the practical applicability of our framework, we further analyze two real-world microdroplet datasets, revealing critical insights into stress-tolerance behavior under experimental constraints. This study not only advances methodological tools for reliability inference under hybrid censoring but also establishes a blueprint for combining classical and Bayesian paradigms in stress-strength modeling.
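For context, under the common-shape parameterization f(t) = αλt^(α−1)e^(−λt^α), with stress X ~ Weibull(α, λ₁) and strength Y ~ Weibull(α, λ₂), the reliability has a well-known closed form (a standard identity, independent of the censoring scheme used in the paper):

```latex
R = \Pr(Y > X)
  = \int_{0}^{\infty} \alpha \lambda_1 x^{\alpha-1}
      e^{-\lambda_1 x^{\alpha}}\, e^{-\lambda_2 x^{\alpha}}\, dx
  = \frac{\lambda_1}{\lambda_1 + \lambda_2}.
```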
- Research Article
- 10.31185/bsj.vol20.iss33.1413
- Dec 20, 2025
- مجلة العلوم الأساسـية
- Mohammad Shakir Zghyr
The quantification of uncertainty in scientific modeling is fundamentally divided between the classical (frequentist) and Bayesian paradigms, compelling practitioners to adopt an either/or approach that often discards valuable information. This paper introduces a novel, unified mathematical framework that synthesizes the inferential outputs of both paradigms. Leveraging the likelihood function as a common foundation, the framework employs a linear pooling operator to combine the classical confidence distribution and the Bayesian posterior distribution into a single, more comprehensive representation of uncertainty. The primary output is a "Unified Uncertainty Interval" (UUI), which inherits both the long-run frequency guarantees of confidence intervals and the intuitive, belief-based interpretation of credible intervals. Case studies involving binomial proportion estimation, particularly under conditions of prior-data conflict, demonstrate that the UUI provides a robust and balanced measure of uncertainty. The framework offers a pragmatic solution to bridge the gap between classical and Bayesian approaches, providing a richer, more nuanced tool for decision-making under uncertainty and moving beyond paradigmatic dogmatism towards a more holistic inferential practice.
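A minimal sketch of the pooling idea for a binomial proportion: combine a normal-approximation confidence density with a Beta posterior via equal-weight linear pooling and read a central 95% interval off the pooled CDF. The weights, the deliberately conflicting prior, and the normal approximation are all illustrative assumptions, not the paper's exact construction:

```python
import numpy as np
from scipy import stats

x, n = 7, 20                         # observed successes / trials
p_hat = x / n
grid = np.linspace(1e-4, 1 - 1e-4, 2000)
dx = grid[1] - grid[0]

# Frequentist side: normal approximation to the confidence density.
cd = stats.norm.pdf(grid, loc=p_hat, scale=np.sqrt(p_hat * (1 - p_hat) / n))
# Bayesian side: posterior under a conflicting Beta(8, 2) prior.
post = stats.beta.pdf(grid, 8 + x, 2 + n - x)

pooled = 0.5 * cd + 0.5 * post       # equal-weight linear opinion pool
pooled /= pooled.sum() * dx          # renormalize on the grid

cdf = np.cumsum(pooled) * dx
lo = grid[np.searchsorted(cdf, 0.025)]
hi = grid[np.searchsorted(cdf, 0.975)]
print(f"95% Unified Uncertainty Interval ~ [{lo:.3f}, {hi:.3f}]")
```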
- Research Article
- 10.64898/2025.12.19.25342631
- Dec 20, 2025
- medRxiv : the preprint server for health sciences
- Runye Shi + 24 more
Understanding the heterogeneous nature of genetic effects is critical for advancing our knowledge of the genetic architecture of complex traits and developing personalized management strategies. However, existing methods often rely on pre-specified modifying variables to model this heterogeneity, limiting their ability to capture effects driven by complex or unobserved factors. Here, we propose MOCHA (Multi-Omics Clustering for Heterogeneous Association), a novel Bayesian analytical paradigm that identifies latent population subgroups with distinct genetic effects directly from multi-omics data, without requiring a priori variable specification. Extensive simulations confirm that MOCHA accurately identifies the underlying clustering structure, demonstrates superior performance in identifying and ranking features with cluster-specific effects, and provides reliable parameter estimates. Applying MOCHA to genomic and transcriptomic data from the IMAGEN study, we identified two distinct neurodevelopmental clusters associated with adolescent inhibitory control. Post-hoc characterization of these clusters provided novel insights into the mechanisms of brain plasticity, demonstrating the method's practical utility and interpretability.
- Research Article
- 10.1002/bimj.70085
- Oct 27, 2025
- Biometrical journal. Biometrische Zeitschrift
- Shunichiro Orihara + 2 more
In observational studies, the propensity score plays a central role in estimating causal effects of interest. The inverse probability weighting (IPW) estimator is commonly used for this purpose. However, if the propensity score model is misspecified, the IPW estimator may produce biased estimates of causal effects. Previous studies have proposed robust propensity score estimation procedures; however, these methods require considering parameters that dominate the uncertainty of sampling and treatment allocation. This study proposes a novel Bayesian estimation procedure that decides the parameter probabilistically rather than deterministically. Since the IPW estimator and propensity score estimator can be derived as solutions to certain loss functions, the general Bayesian paradigm, which does not require considering the full likelihood, can be applied. Therefore, our proposed method only requires the same level of assumptions as ordinary causal inference contexts. The proposed Bayesian method demonstrates equal or superior results compared to some previous methods in simulation experiments and is also applied to real data, namely the Whitehall dataset.
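For context, a standard frequentist sketch of the IPW estimator the abstract builds on, with a logistic propensity model (the paper's contribution, a general-Bayes treatment of the underlying loss functions, is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                    # confounders
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment assignment
y = 2.0 * a + X[:, 0] + rng.normal(size=1000)     # outcome, true ATE = 2

# Estimated propensity scores e(X) = P(A = 1 | X).
e = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]

# Hajek-normalized IPW estimate of the average treatment effect.
w1, w0 = a / e, (1 - a) / (1 - e)
ate = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()
print("IPW ATE estimate:", round(float(ate), 2))
```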
- Research Article
- 10.1002/dta.3961
- Oct 25, 2025
- Drug Testing and Analysis
- Taisuke Kuroda + 9 more
ABSTRACT Flunixin meglumine is widely used to manage pain and inflammation in horses, and its regulation requires robust pharmacokinetic analysis for harmonization. In this study, we conducted a meta‐analysis of flunixin disposition using plasma and urine concentration data from 65 horses across four countries to robustly estimate pharmacokinetic parameters for setting screening limits (SLs) for controlling medications in horses. A population (POP) model was developed using nonlinear mixed‐effects model analysis. The irrelevant plasma concentration (IPC) and irrelevant urine concentration (IUC) were determined to be 1.9 and 70.2 ng/mL, respectively, with a typical urine‐to‐plasma ratio (Rss) of 35.9. Using the current International Federation of Horseracing Authorities (IFHA) screening limits (ISLs) (1 ng/mL for plasma; 100 ng/mL for urine), a longer detection time (DT) was observed for plasma than for urine, especially after multiple doses, as the plasma ISL corresponds to a slower terminal elimination phase. Increasing the current plasma ISL from 1 to 3 ng/mL—while keeping the current urine ISL at 100 ng/mL—could better align the plasma and urine DTs. As a limitation of this study, both Standardbred and Thoroughbred data were included, and further data collection is needed to fully ascertain potential breed‐specific effects. Moreover, this POP model also enabled relatively accurate Bayesian estimation of individual withdrawal times (WTs) from limited data. Clinicians could apply this Bayesian approach to making informed WT recommendations for horses when sufficient data are available. While existing non‐POP statistical models remain viable, they may require a more conservative approach to WT estimation than Bayesian methods.
- Research Article
- 10.3390/axioms14100769
- Oct 17, 2025
- Axioms
- Ela Verma + 3 more
The analysis of lifetime data under censoring schemes plays a vital role in reliability studies and survival analysis, where complete information is often difficult to obtain. This work focuses on the estimation of the parameters of the recently proposed generalized Kavya–Manoharan exponential (GKME) distribution under progressive Type-I interval censoring, a censoring scheme that frequently arises in medical and industrial life-testing experiments. Estimation procedures are developed under both classical and Bayesian paradigms, providing a comprehensive framework for inference. In the Bayesian setting, parameter estimation is carried out using Markov Chain Monte Carlo (MCMC) techniques under two distinct loss functions: the squared error loss function (SELF) and the general entropy loss function (GELF). For interval estimation, asymptotic confidence intervals as well as highest posterior density (HPD) credible intervals are constructed. The performance of the proposed estimators is systematically evaluated through a Monte Carlo simulation study in terms of mean squared error (MSE) and the average lengths of the interval estimates. The practical usefulness of the developed methodology is further demonstrated through the analysis of a real dataset on survival times of guinea pigs exposed to virulent tubercle bacilli. The findings indicate that the proposed methods provide flexible and efficient tools for analyzing progressively interval-censored lifetime data.
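For reference, the generic likelihood that such progressive Type-I interval-censored procedures maximize, with F the fitted (here, GKME) CDF, d_i the failures observed in (t_{i−1}, t_i], and R_i the units progressively withdrawn at t_i:

```latex
L(\theta) \;\propto\; \prod_{i=1}^{m}
  \bigl[F(t_i;\theta) - F(t_{i-1};\theta)\bigr]^{d_i}
  \,\bigl[1 - F(t_i;\theta)\bigr]^{R_i}.
```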
- Research Article
- 10.5815/ijisa.2025.05.02
- Oct 8, 2025
- International Journal of Intelligent Systems and Applications
- Rasaki Olawale Olanrewaju + 1 more
In this article, novel mixtures of conditional volatility models (Generalized Autoregressive Conditional Heteroscedasticity (GARCH); Exponential GARCH (EGARCH); Glosten, Jagannathan, and Runkle GARCH (GJR-GARCH); and dependent variable-GARCH (TGARCH)) are thoroughly expounded in a Bayesian paradigm. The Expectation-Maximization (EM) algorithm was employed as the parameter estimation technique to work out posterior distributions of the hyper-parameters after setting up their corresponding prior distributions. The mode was adopted as the stable location parameter instead of the mean, because it can robustly adapt to symmetry, skewness, heteroscedasticity, and multimodality effects simultaneously, as needed to redefine the switching conditional variance processes conceived as mixture components based on the shifting number of modes in the marginal density of the Skewed Generalized Error Distribution (SGED) set as the prior random noise. The models were applied to the ten (10) most used cryptocurrency coins and tokens via their daily open, high, low, close, and volume figures converted and transacted in USD from the same date of inception. Binance Coin (BNB), via its daily lower units transacted in USD (that is, low-BNB), yielded the smallest Deviance Information Criterion (DIC) of 3651.1935. The low-BNB process followed a two-regime TGARCH process, that is, a mixture dependent variable-GARCH (TGARCH (2: 2, 2)) with stable probabilities of 33% and 66%, respectively. The first regime exhibited low unconditional volatility of 16.96664, while the second regime traded with high unconditional volatility of 585.6190. In summary, Binance Coin (BNB) was a mixture of tranquil and stormy market conditions. This implies that the first regime of low-BNB was characterized by a strong fluctuating reaction to past negative daily returns of low-BNB converted to USD, while the second regime showed a weak fluctuating reaction. Additionally, the first regime exhibited a low, repetitive volatility process, while the second was characterized by a highly persistent fluctuating process. For financial and economic decision-making, cryptocurrency users and financial bodies should look out for financial and economic sabotage agents (such as war, exchange rate instability, political crises, inflation, and network fluctuations) that arose, declined, or fluctuated during the ten (10) years of study of the coins and tokens, to ascertain which of these agents contributed to the volatility process. Mixture models from a Bayesian perspective are of interest because some classical (traditional) models can neither accommodate regime-switching traits nor incorporate prior information known about cryptocurrency coins and tokens. For model performance, DIC values were compared from best-performing to worst-performing, that is, from lower to higher DIC values.
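A minimal sketch of the building block behind such mixtures: a GARCH(1,1) conditional-variance recursion whose one-step density is a two-regime mixture. The regime weights and parameters below are hypothetical, and Gaussian components stand in for the paper's SGED innovations and EM-estimated posteriors:

```python
import numpy as np
from scipy.stats import norm

def garch_variance(returns, omega, alpha, beta):
    """GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(500)        # placeholder daily return series

# Two regimes: tranquil vs. stormy (hypothetical parameter values).
s2_calm = garch_variance(r, omega=1e-6, alpha=0.05, beta=0.90)
s2_storm = garch_variance(r, omega=1e-5, alpha=0.20, beta=0.75)

pi = 0.66                                  # mixing weight for the calm regime
mix_density = (pi * norm.pdf(r, 0.0, np.sqrt(s2_calm))
               + (1 - pi) * norm.pdf(r, 0.0, np.sqrt(s2_storm)))
print("mixture log-likelihood:", np.log(mix_density).sum())
```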
- Research Article
- 10.1088/2058-9565/ae08e1
- Oct 7, 2025
- Quantum Science and Technology
- Julia Boeyens + 5 more
Abstract Global quantum sensing enables parameter estimation across arbitrary ranges with a finite number of measurements. Among the various existing formulations, the Bayesian paradigm stands as a flexible approach for optimal protocol design under minimal assumptions. Within this paradigm, however, there are two fundamentally different ways to capture prior ignorance and uninformed estimation; namely, requiring invariance of the prior distribution under specific parameter transformations, or adhering to the geometry of a state space. In this paper we carefully examine the practical consequences of both the invariance-based and the geometry-based approaches, and show how to apply them in relevant examples of rate and coherence estimation in noisy settings. We find that, while the invariance-based approach often leads to simpler priors and estimators and is more broadly applicable in adaptive scenarios, the geometry-based one can lead to faster posterior convergence in a well-defined measurement setting. Crucially, by employing the notion of location-isomorphic parameters, we are able to unify the two formulations into a single practical and versatile framework for optimal global quantum sensing, detailing when and how each set of assumptions should be employed to tackle any given estimation task. We thus provide a blueprint for the design of novel high-precision quantum sensors.
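As a concrete instance of the invariance-based route: for a rate parameter γ > 0 with no natural scale, requiring the prior to be invariant under rescalings γ → cγ forces the (improper) form

```latex
\pi(\gamma) \propto \frac{1}{\gamma},
```

whereas a geometry-based prior would instead follow the metric structure of the state space (for instance, a Jeffreys-type prior built from the relevant Fisher information). This contrast is generic to the two approaches; the paper's specific constructions for rate and coherence estimation may differ in detail.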
- Research Article
- 10.1186/s12874-025-02664-5
- Oct 2, 2025
- BMC Medical Research Methodology
- Minghong Yao + 6 more
Background: Standard random-effects meta-analysis models for rare events exhibit significant limitations, particularly when synthesizing studies with double-zero events. While methodological advances in both frequentist and Bayesian frameworks now offer robust alternatives that bypass continuity corrections, the comparative performance of these approaches—especially between Bayesian and frequentist paradigms—remains understudied. Methods: This study evaluates the performance of ten widely used meta-analysis models for binary outcomes, using the odds ratio as the effect measure. The evaluated models comprise seven frequentist and three Bayesian approaches. Simulations systematically varied key parameters, including control event rates, treatment effects, study numbers, and heterogeneity levels, to compare model performance across four metrics: percentage bias, 95% confidence/credible interval width, root mean square error, and coverage. The methods were further illustrated through applications to two published rare events meta-analyses. Results: The results show that the beta-binomial model proposed by Kuss generally performed well, while the generalised estimating equations did not. In cases where heterogeneity is not large, all models tended to have a good performance except for the generalised estimating equations. When the heterogeneity is large, none of the compared models produced good performance. The Bayesian model incorporating the Beta-Hyperprior proposed by Hong et al. performed well, followed by the binomial-normal hierarchical model proposed by Bhaumik. Conclusions: In summary, the beta-binomial model proposed by Kuss is recommended for rare events meta-analyses, and the Bayesian model is a promising method for pooling rare events data. Supplementary Information: The online version contains supplementary material available at 10.1186/s12874-025-02664-5.
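A minimal sketch of the beta-binomial idea for rare events: model arm-level event counts directly with a beta-binomial likelihood, which accommodates double-zero studies without continuity corrections. This is a simplified single-arm illustration with synthetic counts, not Kuss's full model:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

# Hypothetical rare-event data: (events, patients) per study arm.
events = np.array([0, 1, 0, 2, 0])
n = np.array([50, 60, 45, 80, 55])

def neg_loglik(params):
    a, b = np.exp(params)              # keep both shape parameters positive
    return -betabinom.logpmf(events, n, a, b).sum()

fit = minimize(neg_loglik, x0=np.log([1.0, 20.0]), method="Nelder-Mead")
a, b = np.exp(fit.x)
print(f"pooled event probability ~ {a / (a + b):.4f}")
```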
- Research Article
- 10.1093/pnasnexus/pgaf275
- Aug 29, 2025
- PNAS Nexus
- Arthur Prat-Carrabin + 1 more
A classic result of psychophysics is that human perceptual estimates are more variable for larger magnitudes. This “Weber behavior,” however, has typically not been the focus of the prominent Bayesian paradigm. Here, we examine the variability of a Bayesian observer in comparison with human subjects. In two preregistered experiments, we manipulate the prior distribution and the reward function in a numerosity-estimation task. When large numerosities are more frequent or more rewarding, the Bayesian observer exhibits an “anti-Weber behavior,” in which larger magnitudes yield less variable responses. Human subjects exhibit a similar pattern, thus breaking a long-standing result of psychophysics. Nevertheless, subjects’ responses are best reproduced by a logarithmic encoding of magnitudes, a proposal of Fechner often regarded as accounting for Weber behavior. We thus obtain an anti-Weber behavior together with a Fechner encoding. Our results suggest that the increasing variability may be primarily due to the skewness of natural priors.
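A toy version of the model the abstract describes: a Bayesian observer that encodes numerosity on a logarithmic (Fechner) scale with constant encoding noise and reports the posterior mean over a skewed prior. The prior, noise level, and grid are illustrative, not the preregistered experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.arange(1, 201)               # candidate numerosities

# Skewed prior over magnitudes (illustrative): small numbers more frequent.
prior = 1.0 / grid
prior /= prior.sum()

def estimate(true_num, sigma=0.15):
    m = np.log(true_num) + sigma * rng.standard_normal()   # Fechner encoding
    like = np.exp(-0.5 * ((m - np.log(grid)) / sigma) ** 2)
    post = like * prior
    return (grid * post).sum() / post.sum()                # posterior mean

# Response variability grows with magnitude: Weber-like behavior emerges
# from constant noise on the log scale.
for x in (10, 40, 160):
    reps = [estimate(x) for _ in range(2000)]
    print(x, round(float(np.std(reps)), 2))
```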
- Research Article
- 10.1080/00949655.2025.2550355
- Aug 27, 2025
- Journal of Statistical Computation and Simulation
- Heba S Mohammed + 2 more
A novel unit-Weibull (UW) distribution has been introduced as an alternative to the beta and Kumaraswamy distributions, with decreasing, increasing, unimodal, and anti-unimodal density shapes. Progressive first-failure censoring has been extensively utilized in practice when an experimenter wants to eliminate certain test unit groups before the first failure is noticed in all groups. Assuming that the failure times of experimental units follow the UW distribution, this paper deals with different Bayesian and non-Bayesian estimation issues of model parameters and different reliability indices of the UW model in the presence of data gathered from the proposed censoring technique. Besides the common likelihood function (LF) method, the product of spacings (PS) method is also utilized to carry out the frequentist point and interval estimators. In the Bayesian paradigm, by leveraging the Markov chain Monte Carlo technique as well as the squared error loss, we examined the PS function as an alternative to the usual LF, and both are addressed under the Bayesian setup for all unknown parameters. Using Fisher information gathered from the proposed LF and PS approaches, two different asymptotic interval estimators are created. Additionally, using the Markov chains simulated from the LF-based and PS-based posteriors, two different Bayes credible intervals are also acquired. The problem of specifying the optimal progressive censoring is also addressed via four optimum metrics. Simulation experiments are conducted to assess the feasibility of the estimation approaches proposed, taking into account varying group sizes and progressive plans. To show the suggested model's relevance and viability in a real-world setting, two sets of data from the chemical and physical sectors, one founded on vinyl chloride and the other on turbochargers, are examined.
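A minimal sketch of the product-of-spacings idea for complete (uncensored) data, using the unit-Weibull CDF as commonly given, F(x; α, β) = exp(−α(−ln x)^β) on (0, 1); treat the CDF form, the simulated data, and the optimizer settings as assumptions to check against the paper:

```python
import numpy as np
from scipy.optimize import minimize

def uw_cdf(x, alpha, beta):
    # Unit-Weibull CDF as commonly defined (assumption):
    # F(x) = exp(-alpha * (-log x)^beta), 0 < x < 1
    return np.exp(-alpha * (-np.log(x)) ** beta)

def neg_log_spacings(params, x_sorted):
    alpha, beta = np.exp(params)               # enforce positivity
    F = np.concatenate(([0.0], uw_cdf(x_sorted, alpha, beta), [1.0]))
    spacings = np.clip(np.diff(F), 1e-12, None)
    return -np.log(spacings).mean()            # maximize mean log-spacing

rng = np.random.default_rng(0)
u = rng.uniform(size=100)
# Inverse-CDF sampling with alpha = 2, beta = 1.5:
x = np.sort(np.exp(-(-np.log(u) / 2.0) ** (1.0 / 1.5)))

fit = minimize(neg_log_spacings, x0=np.log([1.0, 1.0]), args=(x,),
               method="Nelder-Mead")
print("PS estimates (alpha, beta):", np.exp(fit.x))
```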
- Research Article
- 10.1101/2025.06.12.659382
- Aug 27, 2025
- bioRxiv
- Ayush Saurabh + 4 more
Förster resonance energy transfer (FRET) is a widely used tool to probe nanometer-scale dynamics, projecting rich 3D biomolecular motion onto noisy 1D traces. However, interpretation of FRET traces remains challenging due to degeneracy—distinct structural states map to similar FRET efficiencies—and often suffers from under- and/or over-fitting due to the need to predefine the number of FRET states and noise characteristics. Here we provide a new software package, Bayesian nonparametric FRET (BNP-FRET), for binned data obtained from integrative detectors, which eliminates user-dependent parameters and accurately incorporates all known noise sources, enabling the identification of distinct configurations from 1D traces in a plug-n-play manner. Using simulated and experimental data, we demonstrate that BNP-FRET eliminates the logistical barrier of predetermining states for each FRET trace and permits high-throughput, simultaneous analysis of a large number of kinetically heterogeneous traces. Furthermore, working in the Bayesian paradigm, BNP-FRET naturally provides uncertainty estimates for all model parameters, including the number of states, kinetic rates, and FRET efficiencies.
- Research Article
- 10.1016/j.carrev.2025.08.019
- Aug 1, 2025
- Cardiovascular revascularization medicine : including molecular interventions
- Abhishek Chaturvedi + 13 more
Safety of pre-procedure fasting versus non-fasting protocols before cardiac catheterization - a Bayesian meta-analysis of randomized clinical trials.