Articles published on the Markov chain Monte Carlo algorithm
3072 Search results
- New
- Research Article
- 10.1111/bmsp.70028
- Jan 20, 2026
- The British journal of mathematical and statistical psychology
- Chen-Wei Liu
Hidden Markov diagnostic classification models capture how students' cognitive attributes evolve over time. This paper introduces a Bayesian Markov chain Monte Carlo algorithm for diagnostic classification models that jointly estimates time-varying Q-matrices, latent attributes, item parameters, attribute class proportions, and transition matrices across multiple occasions. Using the R package hmdcm developed for this study, Monte Carlo simulations demonstrate accurate parameter recovery, and an empirical probability-concept assessment confirms the algorithm's ability to trace attribute trajectories, supporting its value for longitudinal diagnostic classification in both research and instructional practice.
- New
- Research Article
- 10.1002/cjs.70040
- Jan 15, 2026
- Canadian Journal of Statistics
- Sonia Alouini + 1 more
The asymptotic dependence structure between multivariate extreme values is fully characterized by their projections on the unit simplex. Under mild conditions, the only constraint on the resulting distributions is that their marginal means must be equal, which results in a nonparametric model that can be difficult to use in applications. Mixtures of Dirichlet distributions have been proposed for use as a semiparametric model, but fitting them is awkward. In this article, we propose a new approach to the use of Dirichlet mixtures, based on tilting, to ensure that the moment conditions are satisfied. We show that these tilted mixtures are dense in the full nonparametric family, are well defined in all dimensions, and allow the probabilistic clustering of extreme events. In order to fit them, we use a fast Markov chain Monte Carlo algorithm that does not require fine-tuning. Its performance is assessed using simulations and an application to financial data.
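To make the moment constraint concrete: a valid angular distribution on the unit simplex must have equal marginal means, and an untilted Dirichlet mixture generally violates this. The minimal numpy sketch below (illustrative parameters only; it does not implement the authors' tilting procedure) draws from a two-component Dirichlet mixture on the simplex in three dimensions and shows the unequal marginal means that tilting is designed to correct.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-component Dirichlet mixture on the 2-simplex in R^3.
# Hypothetical parameters chosen for illustration only.
weights = np.array([0.6, 0.4])
alphas = [np.array([2.0, 1.0, 1.0]), np.array([1.0, 3.0, 2.0])]

def sample_mixture(n):
    """Draw n points from the Dirichlet mixture."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.dirichlet(alphas[k]) for k in comps])

x = sample_mixture(50_000)
# The extreme-value constraint requires equal marginal means (1/3 each here).
print("marginal means:", x.mean(axis=0))  # generally NOT equal -> needs tilting
```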
- New
- Research Article
- 10.1016/j.ijrmms.2025.106327
- Jan 1, 2026
- International Journal of Rock Mechanics and Mining Sciences
- Zhenting Sun + 8 more
Three-dimensional discrete fracture network identification based on deep learning and reversible jump Markov chain Monte Carlo algorithm
- New
- Research Article
- 10.1080/02664763.2025.2606972
- Dec 31, 2025
- Journal of Applied Statistics
- Djidenou Montcho + 3 more
From a practical perspective, proposals are one of the main bottlenecks for any Markov chain Monte Carlo (MCMC) algorithm. This paper suggests a novel data-driven proposal for reversible jump MCMC for Bayesian variable selection in the context of predictive risk assessment for schizophrenia based on imaging genetic data. Given functional magnetic resonance imaging and single nucleotide polymorphism data from healthy individuals and people diagnosed with schizophrenia, we use a Bayesian probit model to select discriminating variables for inferential purposes, while to estimate the predictive risk, the most promising models are combined using a Bayesian model averaging scheme.
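The paper's data-driven reversible jump proposals target a Bayesian probit model; as a much simpler stand-in, the sketch below explores the model space of a Gaussian linear model with single-flip add/drop Metropolis moves, scoring models by a BIC-style approximation to the log marginal likelihood. All data and scores here are toy assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 10 candidate predictors, only the first two matter.
n, p = 100, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)

def log_score(gamma):
    """BIC-style approximation to the log marginal likelihood of model gamma."""
    k = gamma.sum()
    if k == 0:
        rss = np.sum((y - y.mean()) ** 2)
    else:
        Xg = X[:, gamma]
        beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        rss = np.sum((y - Xg @ beta) ** 2)
    return -0.5 * n * np.log(rss / n) - 0.5 * k * np.log(n)

gamma = np.zeros(p, dtype=bool)
counts = np.zeros(p)
n_iter = 5000
for _ in range(n_iter):
    j = rng.integers(p)              # propose flipping one inclusion indicator
    prop = gamma.copy()
    prop[j] = not prop[j]
    if np.log(rng.uniform()) < log_score(prop) - log_score(gamma):
        gamma = prop                 # accept the add/drop move
    counts += gamma
print("posterior inclusion frequencies:", np.round(counts / n_iter, 2))
```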
- New
- Research Article
- 10.1080/10618600.2025.2572327
- Dec 29, 2025
- Journal of Computational and Graphical Statistics
- Santiago Marin + 2 more
Modern approaches to perform Bayesian variable selection rely mostly on the use of shrinkage priors. That said, an ideal shrinkage prior should be adaptive to different signal levels, ensuring that small effects are ruled out, while keeping relatively intact the important ones. With this task in mind, we develop the nonparametric Bayesian Lasso, an adaptive and flexible shrinkage prior for Bayesian regression and variable selection, particularly useful when the number of predictors is comparable to or larger than the number of available data points. We build on spike-and-slab Lasso ideas and extend them by placing a Dirichlet process prior on the shrinkage parameters. The result is a prior on the regression coefficients that can be seen as an infinite mixture of Laplace distributions, all offering different amounts of regularization, ensuring a more adaptive and flexible shrinkage. We also develop an efficient Markov chain Monte Carlo algorithm for posterior inference. Through extensive simulation studies and real-world data analyses, we illustrate that our proposed method leads to coefficient recovery, variable selection accuracy, and out-of-sample predictions that are comparable to or better than those from state-of-the-art shrinkage priors, highlighting the benefits of the nonparametric Bayesian Lasso over existing methods. Supplementary materials for this article are available online.
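A hedged illustration of the prior construction described above: placing a Dirichlet process on the Laplace shrinkage scales yields, after truncated stick-breaking, a finite approximation to the infinite Laplace mixture. The hyperparameters below are invented for illustration, and the sketch draws from the prior only; it is not the paper's posterior sampler.

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_laplace_prior_draw(n_coef, alpha=1.0, trunc=50):
    """Truncated stick-breaking draw from a Dirichlet process prior
    on Laplace shrinkage scales (illustrative hyperparameters)."""
    v = rng.beta(1.0, alpha, size=trunc)                      # stick fractions
    w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))   # mixture weights
    scales = rng.gamma(2.0, 1.0, size=trunc)                  # one scale per atom
    comp = rng.choice(trunc, size=n_coef, p=w / w.sum())
    # Each coefficient gets a Laplace prior with its component's scale:
    return rng.laplace(loc=0.0, scale=scales[comp])

beta = dp_laplace_prior_draw(n_coef=20)
print(np.round(beta, 2))  # heterogeneous shrinkage across coefficients
```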
- New
- Research Article
- 10.1190/geo-2025-0227
- Dec 25, 2025
- Geophysics
- Bo Li + 4 more
The Q value quantitatively characterizes the extent of seismic wave absorption in rocks, aids in the compensation of seismic data, and supports the detection of oil and gas properties. The conventional Q-value extraction method is typically suitable for vertical seismic profiles (VSP). When applied to reflection seismic data, however, the accuracy of interval-Q extraction is relatively low. At the same time, Q-value extraction accuracy is severely affected by interference from reflected seismic waves, which makes it difficult to accurately separate the tuning waves. We therefore propose a Q-value inversion method that integrates well logs and seismic data. It utilizes logging data to synthesize unattenuated seismic data and formulates an objective function between the actual attenuated and non-attenuated data using nonstationary equations. Because of the objective function's strong nonlinearity, conventional inversion methods fail to address it; thus, a Markov chain Monte Carlo (MCMC) algorithm is employed to solve it. Upon acquiring the Q value at the well location, we develop an interpolation objective function by incorporating dip angle constraints, ultimately deriving the complete Q-value dataset through inversion. In addition, we validate the effectiveness of the method using two complex models and successfully apply it to field seismic data.
- Research Article
- 10.1017/pan.2025.10029
- Dec 22, 2025
- Political Analysis
- Licheng Liu + 1 more
Despite the recent methodological advancements in causal panel data analysis, concerns remain about unobserved unit-specific time-varying confounders that cannot be addressed by unit or time fixed effects or their interactions. We develop a Bayesian sensitivity analysis (BSA) method to address the concern. Our proposed method is built upon a general framework combining Rubin's Bayesian framework for model-based causal inference (Rubin [1978], The Annals of Statistics 6(1), 34–58) with parametric BSA (McCandless, Gustafson, and Levy [2007], Statistics in Medicine 26(11), 2331–2347). We assess the sensitivity of the causal effect estimate from a linear factor model to the possible existence of unobserved unit-specific time-varying confounding, using the coefficients of the treatment variable and observed confounders in the model for the unobserved confounding as sensitivity parameters. We utilize priors on these coefficients to constrain the hypothetical severity of unobserved confounding. Our proposed approach allows researchers to benchmark the assumed strength of confounding on observed confounders more systematically than conventional frequentist sensitivity analysis techniques. Moreover, to cope with convergence issues typically encountered in nonidentified Bayesian models, we develop an efficient Markov chain Monte Carlo algorithm exploiting transparent parameterization (Gustafson [2005], Statistical Science 20(2), 111–140). We illustrate our proposed method in a Monte Carlo simulation study as well as an empirical example on the effect of war on inheritance tax rates.
- Research Article
- 10.3390/min15121330
- Dec 18, 2025
- Minerals
- Lijuan Zhang + 6 more
The exploration of marine minerals, essential for sustainable development, requires advanced techniques for accurate resource delineation. The self-potential (SP) method, sensitive to mineral polarization, has been increasingly deployed using autonomous underwater vehicles. This approach enables dense planar SP data acquisition, offering the potential to reduce inversion uncertainties through enhanced data volume. This study investigates the benefits of inverting planar SP datasets for improving the spatial delineation of subsurface deposits. An analytical solution was derived to describe SP responses of spherical polarization models under a planar measurement grid. An adaptive Markov chain Monte Carlo algorithm within the Bayesian framework was employed to quantitatively assess the constraints imposed by the enriched dataset. The proposed methodology was validated through two synthetic cases, along with a laboratory-scale experiment that monitored the redox process of a spherical iron–copper model. The results showed that, compared to single-line data, the planar data reduced the average error in parameter means from 10.9% and 6.4% to 4.1% and 1.7% for synthetic and experimental cases, respectively. In addition, the 95% credible intervals of model parameters narrowed by nearly 50% and 40%, respectively.
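For readers unfamiliar with adaptive MCMC, the sketch below shows one standard adaptive Metropolis scheme (Haario-style covariance adaptation) on a stand-in 2-D Gaussian target; the paper's sampler targets the SP forward-model posterior, and its adaptation details may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta):
    """Stand-in log posterior (correlated 2-D Gaussian); the paper's target
    is the SP forward model's posterior, which is not reproduced here."""
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

d, n_iter, eps = 2, 20_000, 1e-6
theta = np.zeros(d)
chain = np.empty((n_iter, d))
for t in range(n_iter):
    if t < 500:
        prop_cov = 0.1 * np.eye(d)           # fixed covariance during warm-up
    else:
        # Haario-style adaptation: scale the empirical chain covariance.
        prop_cov = (2.38**2 / d) * np.cov(chain[:t].T) + eps * np.eye(d)
    prop = rng.multivariate_normal(theta, prop_cov)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain[t] = theta
print("posterior mean ≈", chain[5000:].mean(axis=0).round(2))
```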
- Research Article
- 10.1007/s13253-025-00719-0
- Dec 17, 2025
- Journal of Agricultural, Biological and Environmental Statistics
- Paolo Onorati + 1 more
Environmental phenomena are influenced by complex interactions among various factors. For instance, the amount of rainfall measured at different stations within a given area is shaped by atmospheric conditions, orography, and the physics of water processes. Motivated by the need to analyze rainfall across complex spatial domains, we propose a flexible Bayesian semi-parametric model for spatially distributed data. This method effectively accounts for spatial correlation while incorporating dependencies on geographical characteristics in a highly flexible manner. Indeed, using latent Gaussian processes, indexed by spatial coordinates and topographical features, the model integrates spatial dependencies and environmental characteristics within a nonparametric framework. Posterior inference is conducted using an efficient rejection-free Markov chain Monte Carlo algorithm, which eliminates the need for tuning parameter calibration, ensuring smoother and more reliable estimation. The model's flexibility is evaluated through a series of simulation studies, involving different rainfall and spatial correlation scenarios, to demonstrate its robustness across various conditions. We then apply the model to a large dataset of rainfall events collected from the Italian North-East, an area known for its complex orography and diverse meteorological drivers. By analyzing these data, we generate detailed maps that illustrate the mean and standard deviation of rainfall and rainy days. The method is implemented in a new R package available on GitHub.
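The abstract does not spell out its rejection-free sampler; one widely used rejection-free scheme is slice sampling, sketched below on a stand-in 1-D mixture target. Every iteration moves the chain without a discarded proposal, which is what removes the tuning burden, though the paper's exact algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_density(x):
    """Stand-in 1-D target: an equal mixture of N(-2,1) and N(2,1)."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def slice_sample(x0, n_iter, w=1.0):
    """Univariate slice sampler with stepping-out (Neal 2003)."""
    x, out = x0, np.empty(n_iter)
    for i in range(n_iter):
        log_y = log_density(x) + np.log(rng.uniform())  # auxiliary slice height
        u = rng.uniform()
        lo, hi = x - w * u, x + w * (1.0 - u)           # randomly placed bracket
        while log_density(lo) > log_y:                  # step out to the left
            lo -= w
        while log_density(hi) > log_y:                  # step out to the right
            hi += w
        while True:                                     # shrink onto the slice
            x_new = rng.uniform(lo, hi)
            if log_density(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        out[i] = x
    return out

samples = slice_sample(0.0, 10_000)
print("sample mean ≈ 0:", samples.mean().round(2))
```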
- Research Article
- 10.1080/03610918.2025.2603532
- Dec 16, 2025
- Communications in Statistics - Simulation and Computation
- Valérie Girardin + 3 more
Gathered under the name of metabolic networks, trophic, biochemical, and urban networks are here handled as a single field. In the Linear Inverse Modeling framework, these highly complex objects of research are all mathematically represented by weighted oriented graphs whose vertices are compartments and edges are flows (or fluxes) of matter or energy. Flows satisfying realistic metabolic constraints belong to very anisotropic high-dimensional polytopes that cannot be analytically determined. Sampling the polytope of solutions yields a set of possible scenarios for the metabolic network. Different Markov Chain Monte Carlo (MCMC) algorithms together with their most recent implementations are scrutinized, leading to the design of an updated R package called {samplelim}. The most recent implementations are compared in terms of both computation time and sampling performance, following a methodology that involves established and new statistical diagnostics and indexes. Application to real-data metabolic networks of the three types shows that {samplelim} gathers the best properties of previous implementations of these MCMC algorithms. Code repositories: the source code of the package {samplelim} is publicly available from its GitHub repository; the code to reproduce the computations has its own private GitHub repository.
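As a concrete picture of polytope sampling, here is a minimal hit-and-run sampler, one of the classic MCMC moves for convex polytopes {x : Ax ≤ b}, run on a toy 2-D unit square. Whether {samplelim} implements exactly this move is an assumption here, and realistic metabolic polytopes are far higher-dimensional and more anisotropic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy polytope {x : A x <= b}: the unit square in 2-D.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

def hit_and_run(x, n_iter):
    """Hit-and-run: pick a random direction, then sample uniformly
    on the chord where that direction stays inside the polytope."""
    out = []
    for _ in range(n_iter):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                 # random unit direction
        ad, slack = A @ d, b - A @ x           # A(x + t d) <= b  =>  t*ad <= slack
        t_hi = np.min(slack[ad > 1e-12] / ad[ad > 1e-12])
        t_lo = np.max(slack[ad < -1e-12] / ad[ad < -1e-12])
        x = x + rng.uniform(t_lo, t_hi) * d    # uniform point on the chord
        out.append(x)
    return np.array(out)

samples = hit_and_run(np.array([0.5, 0.5]), 20_000)
print("mean ≈ [0.5, 0.5]:", samples.mean(axis=0).round(3))
```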
- Research Article
- 10.1080/00949655.2025.2593990
- Dec 6, 2025
- Journal of Statistical Computation and Simulation
- Lara Maleyeff + 2 more
Precision medicine aims to optimize treatment by identifying patient subgroups most likely to benefit from specific interventions. To support this goal, we introduce fkbma, an R package that implements a Bayesian model averaging approach with free-knot B-splines for identifying tailoring variables and treatment-sensitive subgroups. The package employs a reversible jump Markov chain Monte Carlo algorithm to flexibly model treatment effect heterogeneity while accounting for uncertainty in both variable selection and non-linear relationships. It provides a comprehensive framework for detecting predictive biomarkers and enabling robust subgroup identification in clinical trials and observational studies. This paper details the statistical methodology underlying fkbma, outlines its computational implementation, and demonstrates its application through simulated data examples. The flexibility of the package makes it a valuable tool for precision medicine research, offering a principled approach to treatment personalization.
- Research Article
- 10.1177/15578666251392595
- Dec 5, 2025
- Journal of computational biology : a journal of computational molecular cell biology
- Yao-Ban Chan + 2 more
Reconciliations are a mathematical tool to compare the phylogenetic trees of genes to the species that contain them, accounting for events such as gene duplication and loss. Traditional reconciliation methods have predominantly relied on parsimony to infer gene-only evolutionary events and usually make the hypothesis that genes evolve independently. Recently, more advanced models have been developed that account for complex gene interactions stemming from phenomena such as segmental duplications, where multiple genes undergo simultaneous duplication. In this article, we study the NP-hard problem of reconciling gene trees to a species tree with segmental duplications, without the aid of synteny information. We address this problem by proposing a novel probabilistic approach, imposing a Boltzmann distribution over the space of reconciliations. This allows for a Gibbs sampling-like Markov chain Monte Carlo algorithm that uses simulated annealing to effectively find or approximate the most parsimonious reconciliation, as demonstrated through rigorous simulations and re-analysis of empirical datasets. Our findings present a promising new framework for addressing NP-hard reconciliation challenges in phylogenetics, enhancing our understanding of gene evolution and its relationship with species evolution.
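The sketch below illustrates the Boltzmann-plus-annealing idea on a toy discrete space: Metropolis moves under exp(-cost/temperature) with a geometric cooling schedule concentrate the chain on minimum-cost states. Real reconciliation states and parsimony scores are, of course, far richer than this binary stand-in.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy discrete "parsimony" problem standing in for reconciliation space:
# find a binary vector minimizing a cost (in the paper, states are
# reconciliations and the cost is an event-count parsimony score).
target = rng.integers(0, 2, size=30)
cost = lambda s: int(np.sum(s != target))

s = rng.integers(0, 2, size=30)
for it in range(20_000):
    temp = max(0.01, 0.9995**it)              # geometric cooling schedule
    prop = s.copy()
    prop[rng.integers(s.size)] ^= 1           # local Gibbs-like flip move
    # Metropolis acceptance under the Boltzmann distribution exp(-cost/temp):
    if np.log(rng.uniform()) < (cost(s) - cost(prop)) / temp:
        s = prop
print("final cost:", cost(s))                 # ≈ 0: most parsimonious state
```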
- Research Article
- 10.1515/mcma-2025-2023
- Dec 3, 2025
- Monte Carlo Methods and Applications
- Nadji Rahmania + 1 more
We show that Lasso and Bayesian Lasso are very close when the sparsity is large and the noise is small. We propose to solve Bayesian Lasso using a multivalued stochastic differential equation. We derive four discretization algorithms and present highly efficient multilevel Monte Carlo (MLMC) simulations. Additionally, we perform a numerical comparison of Monte Carlo (MC), MLMC, and the proximal Markov chain Monte Carlo algorithm (PMALA).
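A hedged sketch of the proximal Langevin idea on a toy Bayesian Lasso target: the non-smooth L1 term enters through its proximal map (soft-thresholding) in the proposal drift, and a Metropolis-Hastings correction keeps the chain exact. This forward-backward variant is one of several PMALA-type discretizations and is not claimed to match the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy Bayesian Lasso target: log pi(b) = -||y - X b||^2 / 2 - lam * ||b||_1.
n, p, lam = 50, 5, 2.0
X = rng.standard_normal((n, p))
y = X @ np.array([1.5, -1.0, 0.0, 0.0, 0.0]) + rng.standard_normal(n)

log_pi = lambda b: -0.5 * np.sum((y - X @ b) ** 2) - lam * np.abs(b).sum()
grad_ll = lambda b: X.T @ (y - X @ b)                          # smooth-part gradient
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0)  # prox of t*||.||_1

gamma = 1e-3
b = np.zeros(p)
chain = []
for _ in range(20_000):
    mean = soft(b + gamma * grad_ll(b), gamma * lam)      # forward-backward drift
    prop = mean + np.sqrt(2 * gamma) * rng.standard_normal(p)
    mean_back = soft(prop + gamma * grad_ll(prop), gamma * lam)
    # Metropolis-Hastings correction for the asymmetric Gaussian proposal:
    log_q = lambda a, m: -np.sum((a - m) ** 2) / (4 * gamma)
    log_acc = log_pi(prop) - log_pi(b) + log_q(b, mean_back) - log_q(prop, mean)
    if np.log(rng.uniform()) < log_acc:
        b = prop
    chain.append(b)
print("posterior mean:", np.round(np.mean(chain, axis=0), 2))
```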
- Research Article
- 10.1080/00949655.2025.2588591
- Dec 2, 2025
- Journal of Statistical Computation and Simulation
- Mojtaba Ganjali + 3 more
In this manuscript, we develop a unified joint modelling and estimation framework for zero-inflated count and longitudinal semi-continuous data, with a focus on models structured around the exponential family and two-part hurdle formulations. We first review and synthesize existing longitudinal hurdle models, identifying a common structure across diverse approaches. Motivated by this foundation, we introduce novel joint models that integrate semi-continuous longitudinal outcomes with time-to-event data, and propose new methods for dynamic prediction in the presence of semi-continuous outcomes. To facilitate flexible estimation and inference across this class of models, we propose a Bayesian estimation strategy based on a Markov Chain Monte Carlo (MCMC) algorithm. We have implemented these methods in the R package UHJM (available at https://github.com/tbaghfalaki/UHJM), providing accessible tools for parameter estimation and risk prediction. The utility of our framework is demonstrated through simulation studies and two real-world applications characterized by excess zeros.
- Research Article
- 10.1029/2024jb030770
- Dec 1, 2025
- Journal of Geophysical Research: Solid Earth
- Mareen Lösing + 7 more
The shared tectonic history of southwestern Australia and East Antarctica facilitates the exchange of geological insights between the regions. In this study, we present coupled susceptibility and density models obtained through the joint inversion of magnetic and gravity data. By assuming a common geological source for both signals, our coupling method minimizes misfits and variation in information, thereby enhancing the correlation between susceptibility and density. The resulting anomalies demonstrate structural continuity between the continents, aligning closely with major shear zones and seismic reflectors. Combining these results with machine learning, geochemical, and petrophysical databases, we predict a high-resolution (10 km) heat production map for East Antarctica. Utilizing a Markov Chain Monte Carlo (MCMC) algorithm, we further develop a geothermal heat flow map with greater spatial variability than previous studies, yielding separate regional averages (in mW/m²) for East Antarctica and southwestern Australia. Our results provide a crucial high-resolution boundary condition for ice sheet simulations, enabling more realistic estimates of basal meltwater production and ice temperatures.
- Research Article
- 10.37905/euler.v13i3.33769
- Dec 1, 2025
- Euler : Jurnal Ilmiah Matematika, Sains dan Teknologi
- Melati Sinta Nurdanita + 1 more
This study aims to estimate aggregate loss (total loss) in private passenger car insurance data using a Bayesian approach based on the Markov Chain Monte Carlo (MCMC) Gibbs sampling algorithm, with the help of OpenBUGS software. The approach was carried out by modeling claim frequency data using Geometric and Negative Binomial distributions, and claim severity using Gamma and Lognormal distributions. Next, the prior for each model was determined, along with calculations for the likelihood function, joint distribution, marginal distribution, and posterior distribution. Since the resulting posterior distribution could not be calculated analytically, it was computed by simulation using OpenBUGS. Simulation was also used in posterior predictive calculations to estimate future aggregate losses. The results show that the Bayesian approach with the Markov Chain Monte Carlo method, using the Gibbs sampling algorithm as implemented in OpenBUGS, can be used to estimate aggregate loss. The simulations show that the estimated aggregate loss for private passenger car insurance depends on the choice of claim frequency and severity models. The Negative Binomial-Gamma model produced the highest posterior predictive estimate of aggregate loss at $75270.0, while the Geometric-Lognormal model provided the lowest estimate at $70500.0. The model with the smallest standard deviation is the Negative Binomial-Lognormal model, at $62720.0. This study contributes to insurance risk modeling, particularly in determining reserve funds and setting insurance premiums tailored to the target market of insurance companies.
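The underlying quantity is a compound sum: aggregate loss S = X_1 + ... + X_N, with claim frequency N and i.i.d. claim severities X_i. The sketch below simulates a Negative Binomial-Gamma compound model with invented parameters; in the paper, the corresponding draws come from the Gibbs-sampled posterior in OpenBUGS rather than from fixed values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Aggregate loss S = sum of N claim severities, simulated by Monte Carlo.
# Hypothetical parameter values for illustration; the paper estimates
# these from posterior draws produced by Gibbs sampling in OpenBUGS.
def aggregate_losses(n_sims, nb_r=5, nb_p=0.4, gam_shape=2.0, gam_scale=500.0):
    n_claims = rng.negative_binomial(nb_r, nb_p, size=n_sims)  # claim frequency
    return np.array([
        rng.gamma(gam_shape, gam_scale, size=k).sum()          # claim severities
        for k in n_claims
    ])

s = aggregate_losses(10_000)
print(f"mean aggregate loss ≈ {s.mean():,.0f}, sd ≈ {s.std():,.0f}")
```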
- Research Article
- 10.1145/3763286
- Dec 1, 2025
- ACM Transactions on Graphics
- Sascha Holl + 2 more
Markov chain Monte Carlo (MCMC) algorithms are indispensable when sampling from a complex, high-dimensional distribution by a conventional method is intractable. Even though MCMC is a powerful tool, it is also hard to control and tune in practice. Simultaneously achieving both rapid local exploration of the state space and efficient global discovery of the target distribution is a challenging task. In this work, we introduce a novel continuous-time MCMC formulation to the computer science community. Generalizing existing work from the statistics community, we propose a novel framework for adjusting an arbitrary family of Markov processes, used only for local exploration of the state space, into an overall process that is invariant with respect to a target distribution. To demonstrate the potential of our framework, we focus on a simple, yet insightful, application in light transport simulation. As a by-product, we introduce continuous-time MCMC sampling to the computer graphics community. We show how any existing MCMC-based light transport algorithm can be seamlessly integrated into our framework. We prove empirically and theoretically that the integrated version is superior to the ordinary algorithm. In fact, our approach converts any existing algorithm into a highly parallelizable variant with shorter running time, smaller error, and less variance.
- Research Article
- 10.1098/rsos.251918
- Dec 1, 2025
- Royal Society Open Science
- Ajay Jasra + 2 more
In this article, we consider likelihood-based estimation of static parameters for a class of partially observed McKean–Vlasov (MV) diffusion processes with discrete-time observations over a fixed time interval. In particular, using the framework of Awadelkarim, Jasra and Ruzayqat (2024, SIAM J. Control Optim. 62, 2664–2694, doi:10.1137/23M160298X), we develop a new randomized multilevel Monte Carlo method for estimating the parameters, based upon Markovian stochastic approximation (MSA) methodology. New Markov chain Monte Carlo (MCMC) algorithms for the partially observed MV model are introduced, facilitating the application of that framework. We prove, under assumptions, that our estimator is biased, but that the bias is small and controllable. Our approach is implemented on several examples.
- Research Article
- 10.30598/barekengvol20iss1pp0881-0894
- Nov 24, 2025
- BAREKENG: Jurnal Ilmu Matematika dan Terapan
- Didit Budi Nugroho + 2 more
This study compares the Log-linear Realized GARCH (LRG) model and its extension with Continuous and Jump components (LRG-CJ) in modeling the volatility of financial assets, using daily data from the Tokyo Stock Price Index (TOPIX) over 2004–2011. The urgency arises from the need for more accurate volatility models during turbulent periods such as the 2008 Global Financial Crisis and the 2011 Great East Japan Earthquake, when markets exhibit both smooth fluctuations and abrupt jumps. Methodologically, the LRG-CJ framework introduces a novel integration of continuous and jump decomposition into the LRG structure, offering an applied innovation in high-frequency volatility modeling. Realized Volatility (RV) was calculated from 1-, 5-, and 10-minute intraday data and decomposed into continuous and jump components. Parameter estimation employed the Adaptive Random Walk Metropolis (ARWM) sampler within a Markov Chain Monte Carlo algorithm, while model performance was assessed using multiple information criteria and out-of-sample forecast evaluations. The empirical results reveal that incorporating continuous and jump components improves volatility modeling accuracy, forecasting, and Value-at-Risk estimation. However, these benefits are frequency-dependent: the LRG-CJ model shows a superior in-sample fit for 1-minute RV but provides the strongest out-of-sample forecasting and risk prediction at lower frequencies (5- and 10-minute intervals). This highlights that while jumps are best identified at ultra-high frequencies, their predictive value is most effectively captured in slightly aggregated data. The originality of this study lies in being the first empirical application of LRG-CJ, demonstrating how continuous-jump decomposition interacts with the dual-equation structure of LRG, which has not been examined in TGARCH or APARCH contexts. Limitations include sensitivity to microstructure noise in very high-frequency data and computational challenges in parameter convergence. Overall, the findings underscore the novelty and practical importance of the LRG-CJ framework for risk management, offering actionable guidance for aligning volatility models with data frequency.
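Adaptive Random Walk Metropolis tunes the proposal step size on the fly from the running acceptance rate. A minimal 1-D sketch with a stand-in target follows; the paper applies ARWM to the LRG-CJ posterior, and its adaptation schedule may differ from the Robbins-Monro rule used here.

```python
import numpy as np

rng = np.random.default_rng(9)

def log_post(theta):
    """Stand-in log posterior (standard normal); the paper's target is the
    LRG-CJ volatility model's posterior, not reproduced here."""
    return -0.5 * theta**2

# Adaptive random walk Metropolis: tune the step size toward ~44%
# acceptance (a common 1-D target), freezing adaptation after burn-in.
step, theta = 1.0, 0.0
n_iter, burn = 20_000, 5_000
samples = np.empty(n_iter)
for t in range(n_iter):
    prop = theta + step * rng.standard_normal()
    accepted = np.log(rng.uniform()) < log_post(prop) - log_post(theta)
    if accepted:
        theta = prop
    if t < burn:                               # Robbins-Monro style adaptation
        step *= np.exp((float(accepted) - 0.44) / (t + 1) ** 0.6)
    samples[t] = theta
print("tuned step:", round(step, 2))
print("posterior sd ≈", samples[burn:].std().round(2))
```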
- Research Article
- 10.1038/s41598-025-25520-4
- Nov 24, 2025
- Scientific Reports
- Weihong Luo + 4 more
Water resources underpin human society and economic growth, yet freshwater is unevenly distributed, leaving arid regions severely water-stressed. The Beishan mining district in Inner Mongolia exemplifies this challenge: despite abundant minerals, it lacks surface water and depends almost entirely on groundwater. To improve exploration in such complex settings, we propose a Bayesian joint inversion that leverages the complementary sensitivities of Surface Nuclear Magnetic Resonance (SNMR) and Transient Electromagnetic (TEM) data within a probabilistic framework. Using a transdimensional Markov Chain Monte Carlo (MCMC) algorithm, the method adaptively balances data weighting and model complexity. Tests on synthetic and field datasets show that combining SNMR’s direct sensitivity to water content with TEM’s high-resolution resistivity imaging enhances aquifer detection across depths and enables quantitative uncertainty assessment. Applied in Beishan, the approach delineates promising aquifers, with results confirmed by drilling, offering a robust basis for groundwater exploration and sustainable management in arid regions.