Articles published on Markov Chain Monte Carlo
11308 Search results
Sort by Recency
- New
- Research Article
- 10.1007/s11222-026-10843-7
- Feb 13, 2026
- Statistics and Computing
- José Carlos García-Merino + 4 more
Abstract Bayes Factors provide a rigorous methodology for the Bayesian assessment of competing models. However, this approach faces inherent challenges. The computation of Bayesian evidence often involves evaluating high-dimensional, analytically intractable integrals. Moreover, Bayesian evidence is particularly sensitive to prior assumptions, which can significantly bias model comparison. While extensive research has been conducted to address the former limitation, the latter remains a challenging open area of research. To address this issue, this work introduces DRAM-NS, a new methodology combining Nested Sampling (NS) with adaptive Markov Chain Monte Carlo (MCMC) techniques for Bayesian model selection. Specifically, the developed technique enhances the traditional NS algorithm by incorporating a preliminary MCMC step on a subset of the available data, allowing for natural integration of non-informative or improper priors. The effectiveness of the proposed approach is demonstrated through several case studies. Numerical results and discussion demonstrate that DRAM-NS provides a more reliable framework than standard NS alone for model selection in scenarios where prior knowledge is uncertain.
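The Nested Sampling core that DRAM-NS builds on can be illustrated with a minimal sketch: a textbook NS evidence estimate on a toy one-dimensional problem, not the DRAM-NS algorithm itself. The uniform prior, Gaussian likelihood, and rejection sampling of the constrained prior are all illustrative assumptions.

```python
import math
import random

random.seed(42)

LO, HI = -5.0, 5.0          # uniform prior support, density 1/(HI - LO)
N_LIVE, N_ITER = 100, 600   # live points and NS iterations

def loglike(theta):
    # standard-normal log-likelihood for the toy problem
    return -0.5 * theta * theta - 0.5 * math.log(2.0 * math.pi)

live = [random.uniform(LO, HI) for _ in range(N_LIVE)]
live_ll = [loglike(t) for t in live]

Z, x_prev = 0.0, 1.0        # evidence accumulator; X = remaining prior mass
for i in range(1, N_ITER + 1):
    worst = min(range(N_LIVE), key=lambda j: live_ll[j])
    l_min = live_ll[worst]
    x_i = math.exp(-i / N_LIVE)            # expected prior-volume shrinkage
    Z += math.exp(l_min) * (x_prev - x_i)  # trapezoid-free weight L * dX
    x_prev = x_i
    # replace the worst point: rejection-sample the likelihood-constrained prior
    while True:
        cand = random.uniform(LO, HI)
        cand_ll = loglike(cand)
        if cand_ll > l_min:
            break
    live[worst], live_ll[worst] = cand, cand_ll

# add the contribution of the remaining live points
Z += x_prev * sum(math.exp(l) for l in live_ll) / N_LIVE
```

For this toy setup the evidence is analytically close to 0.1 (prior density 1/10 times the Gaussian mass on [-5, 5]), which the accumulated Z should approximate up to the usual NS sampling error.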
- New
- Research Article
- 10.1080/10618600.2026.2627458
- Feb 11, 2026
- Journal of Computational and Graphical Statistics
- Fuheng Cui + 1 more
The family of log-concave density functions is an important class that contains many well-known probability distributions, including the normal. Due to the shape restriction, it is possible to find a nonparametric estimate of the density: the nonparametric maximum likelihood estimator (NPMLE). However, uncertainty quantification for the NPMLE via confidence bounds is less well developed. Bayesian methods are also largely absent, though it is possible to construct constrained density functions and to use Markov chain Monte Carlo (MCMC) to sample from the posterior. In this paper, we start with the NPMLE and use a version of the martingale posterior distribution to quantify uncertainty. The algorithm can be implemented in parallel and hence is fast. We prove existence of the posterior using suitable convergence of a submartingale. We also present illustrations, including real data, and comparisons with alternative approaches.
- New
- Research Article
- 10.1051/0004-6361/202558155
- Feb 9, 2026
- Astronomy & Astrophysics
- E Spitoni + 7 more
The nuclear stellar disc (NSD) of the Milky Way is a dense, rotating stellar system in the central ∼200 pc. The NSD is thought to be primarily fuelled by bar-driven gas inflows from the inner Galactic disc. As part of the LEGARE project, we aim to construct the first chemical-evolution models for the NSD using a Bayesian approach tailored to reproduce the observed metallicity-distribution functions and to compare the predictions with the available abundance ratios of Mg, Si, and Ca relative to Fe. In particular, we intend to test whether the gas flowing in from the inner Galactic disc, which feeds the NSD, can reproduce the observed abundances. We adopted a state-of-the-art chemical-evolution model in which the gas responsible for the formation of the NSD is assumed to be driven by the Galactic-bar-induced inflows. The chemical composition of the accreted material is assumed to reflect that of the Galactic disc at a radius of ∼4 kpc. A Bayesian framework based on Markov Chain Monte Carlo (MCMC) techniques was then employed to fit the metallicity-distribution functions of different samples of NSD stars. If we take the NSD data at face value, without considering possible contamination from bulge stars, we find that a formation scenario based on the inner disc's inflowing gas is inconsistent with the low-metallicity tail of the observed metallicity-distribution function. This is because the inner disc's metallicity, at the epoch of bar formation, was already near solar. On the other hand, models invoking dilution from additional metal-poor inflows successfully reproduce the observations. Models with different levels of gas dilution share similar gas-infall timescales (ranging from 3.7 to 5.2 Gyr) and negligible galactic winds (mass-loading factors, ω, between 0.001 and 0.030). The best-fit model corresponds to an inflow with a metallicity five times lower than that of the inner disc and a moderate star-formation efficiency.
The same model successfully reproduces the observed [α/Fe]-[Fe/H] abundance trends and predicts a star formation history consistent with the most recent estimates. However, if we assume that the metallicity-distribution function is contaminated by metal-poor bulge stars and restrict it to stars with [Fe/H] > -0.3 dex, there is no longer any need for gas dilution. In this case, the best-fit model is characterised by a very low star formation efficiency, coupled with a mild galactic wind. Our analysis indicates that dilution of the gas inflow forming the NSD is necessary to reproduce its observed chemical properties, if bulge contamination in the data is not considered. This implies that, in addition to bar-driven inflows from the inner thin disc, lower-metallicity gas, possibly originating from the thick disc or from more recent accretion events, contributed to the formation of the NSD. On the other hand, when contamination by bulge stars is assumed, dilution is no longer required.
- New
- Research Article
- 10.1142/s0217732326500793
- Feb 6, 2026
- Modern Physics Letters A
- Bhupinder + 1 more
This paper investigates a Kaniadakis holographic dark energy (KHDE) model within a flat Friedmann-Lemaitre-Robertson-Walker (FLRW) universe, utilizing the Hubble horizon as the infrared (IR) cutoff. Based on Kaniadakis’ relativistic generalized framework, the model provides a dynamical explanation for the late-time acceleration of the universe. We employ a kinematic parametrization to solve the field equations and constrain the model’s free parameters, specifically the Hubble constant [Formula: see text] and the expansion index [Formula: see text], using Bayesian inference and Markov Chain Monte Carlo (MCMC) simulations. Our analysis incorporates multiple observational datasets, including 57 OHD points, 1048 Pantheon SNIa events, and 6 BAO measurements. The best-fit value of the Hubble constant is [Formula: see text] for the OHD dataset and [Formula: see text] for the combined OHD+BAO+Pantheon dataset. Evolutionary diagnostics, including the deceleration parameter [Formula: see text], the equation of state [Formula: see text], the statefinder [Formula: see text] and [Formula: see text] diagnostics, and the jerk parameter [Formula: see text], indicate that the model transitions from the quintessence era ([Formula: see text]) into the phantom region ([Formula: see text]), eventually converging to a de Sitter phase in the far future. Furthermore, the violation of the strong energy condition (SEC) and the dominant energy condition (DEC) provides physical validation for the observed cosmic acceleration. An analysis of classical stability confirms that the model remains consistent with the causality condition.
- New
- Research Article
- 10.64497/jssci.174
- Feb 5, 2026
- Journal of Statistical Sciences and Computational Intelligence
- Aliyu Abba Mustapha + 3 more
This paper presents a Bayesian spatio-temporal model with space-time interaction effects for longitudinal data. The main objective is to evaluate how spatial and temporal dependencies, together with their interactions, influence parameter estimation and interpretation. The model incorporates spatial random effects to capture unobserved heterogeneity between neighboring regions, temporal random effects to reflect trends over time, and interaction terms to account for localized space-time variations. A conditional autoregressive (CAR) prior is applied to address spatial dependence, while Markov chain Monte Carlo (MCMC) sampling is used for posterior estimation, supported by convergence diagnostics such as trace plots and the Geweke test. Bootstrap analysis is also applied to assess the stability of estimates and provide complementary validation. Results based on simulated datasets across multiple areal unit sizes show that the intercept and covariate effects are sensitive to spatial resolution, whereas spatial and temporal correlations remain relatively stable across scales. The variance components, particularly the interaction term, capture localized heterogeneity more effectively at smaller spatial units. The findings demonstrate that combining Bayesian estimation with bootstrap analysis provides a reliable framework for understanding spatial and temporal disease dynamics, with practical implications for public health planning and intervention strategies.
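The Geweke convergence check mentioned above can be sketched in a few lines. This is a simplified version that compares early- and late-segment means using naive iid standard errors; the full Geweke statistic uses spectral-density estimates of the standard errors to account for chain autocorrelation.

```python
import math
import random

random.seed(3)

# a toy "chain": near-independent draws from a stationary target
chain = [random.gauss(0.0, 1.0) for _ in range(5000)]

def geweke_z(chain, first=0.1, last=0.5):
    """Simplified Geweke z-score: compare the means of the first 10%
    and the last 50% of the chain, with naive (iid) standard errors."""
    a = chain[: int(first * len(chain))]
    b = chain[int((1 - last) * len(chain)) :]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

z = geweke_z(chain)   # |z| well below ~2 suggests no mean drift
```

For a converged, well-mixed chain the z-score behaves approximately like a standard normal draw, so large absolute values flag a drifting mean between the start and end of the chain.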
- New
- Research Article
- 10.1177/01466216261415631
- Feb 3, 2026
- Applied psychological measurement
- Guangming Li
The Markov chain Monte Carlo (MCMC) method is increasingly widely used to estimate variance components in generalizability theory (GT). However, uninformative priors, an essential ingredient of the MCMC method, have not been explored systematically, and different GT studies vary in their choice of uninformative priors. This study focused on the effect of different uninformative priors on the estimation of variance components. Based on the p × i × r design, eight uninformative prior distributions were chosen for a simulation study and an empirical study, including [prior 1], [prior 2], [prior 3], [prior 4], [prior 5], [prior 6], [prior 7], and [prior 8]. The three posterior point estimates (i.e., mean, median, and mode) were calculated with full data and with 10% missing/sparse data. The simulation and empirical studies show that: (1) [prior 1] performs best and most stably for posterior point estimation in most scenarios, while [prior 6] is always the worst; (2) the differences among the methods are mainly reflected in particular variance components, and [prior 6] shows obvious extreme bias values, with maxima reaching 281.09 and 167.59; (3) posterior mean estimates always produce the largest biases, whereas posterior median estimates are the best; (4) the differences between uninformative priors in estimating variance components become greater when the number of levels of the variance components is small; (5) the results for full data and for 10% missing/sparse data are about the same; the small amount of missing/sparse data has a minimal impact on the results. The running times of the eight distributions range from 489.78 to 692.58 seconds and do not differ much from one another.
- New
- Research Article
- 10.1016/j.jtbi.2025.112313
- Feb 1, 2026
- Journal of theoretical biology
- Yang Deng + 1 more
Mathematical modeling of tuberculosis with two strains, seasonality, and age heterogeneity.
- New
- Research Article
- 10.1002/ceat.70177
- Feb 1, 2026
- Chemical Engineering & Technology
- Farahanaz M Bagwan + 3 more
Abstract Catalytic dehydrogenation of decahydroquinoline (DHQ) to quinoline is a promising pathway for hydrogen release in liquid organic hydrogen carrier systems. In this work, solvent‐free DHQ dehydrogenation over Pd/Al2O3 is systematically investigated to evaluate hydrogen release performance and reaction kinetics. High DHQ conversion (83.9%) and degree of dehydrogenation (82.7%) are achieved at optimal reaction conditions. A power‐law kinetic model based on a simplified reaction mechanism is developed and simulated using a Markov Chain Monte Carlo (MCMC) approach for estimation of rate constants and validation of concentration profiles against experimental data. The apparent activation energies are determined to be 45.85 kJ/mol for DHQ to 5,6,7,8‐tetrahydroquinoline (bz‐THQ) and 185.43 kJ/mol for bz‐THQ to quinoline formation, identifying the latter as the rate‐limiting step. This framework provides mechanistic insight and supports the potential of DHQ as an efficient hydrogen carrier.
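The general pattern of MCMC rate-constant estimation can be sketched on a toy problem: a single first-order decay fitted with a random-walk Metropolis sampler. The rate constant, noise level, flat positive prior, and synthetic data are illustrative assumptions, not the paper's DHQ kinetic model.

```python
import math
import random

random.seed(0)

# synthetic data for first-order decay C(t) = C0 * exp(-k t), true k = 0.30
C0, k_true, sigma = 1.0, 0.30, 0.02
ts = [0.5 * i for i in range(20)]
data = [C0 * math.exp(-k_true * t) + random.gauss(0, sigma) for t in ts]

def logpost(k):
    # Gaussian likelihood with known noise sigma, flat prior on k > 0
    if k <= 0:
        return -math.inf
    sse = sum((c - C0 * math.exp(-k * t)) ** 2 for t, c in zip(ts, data))
    return -sse / (2 * sigma ** 2)

# random-walk Metropolis over the rate constant k
k, lp = 0.1, logpost(0.1)
samples = []
for step in range(20000):
    prop = k + random.gauss(0, 0.02)
    lp_prop = logpost(prop)
    if random.random() < math.exp(min(0.0, lp_prop - lp)):
        k, lp = prop, lp_prop
    if step >= 5000:              # discard burn-in
        samples.append(k)

k_hat = sum(samples) / len(samples)   # posterior-mean rate constant
```

With the synthetic data above, the posterior mean recovers the true rate constant k = 0.30 to within a few percent; the retained samples also give credible intervals for free.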
- New
- Research Article
- 10.1088/1475-7516/2026/02/043
- Feb 1, 2026
- Journal of Cosmology and Astroparticle Physics
- Micol Benetti + 2 more
We investigate a relativistic cosmological model with background rotation, sourced by a non-perfect fluid with anisotropic stress. A modified version of the CLASS Boltzmann code is employed to perform Monte Carlo Markov Chain analyses against Cosmic Microwave Background (CMB) and late-time datasets. The results show that current CMB data constrain the present-day rotation parameter to be negligible. As a consequence, the derived cosmological parameters remain consistent with the standard ΛCDM values. In contrast, late-time probes such as Type Ia supernovae (SNe) and Baryonic Acoustic Oscillations (BAO) allow for a higher level of rotation and yield an increased Hubble constant. However, this comes at the cost of a higher σ8, which remains in tension with the DES-Y3 measurement. Combining CMB, SNe and BAO data confirms the preference for non-rotation.
- New
- Research Article
- 10.1007/s11538-026-01596-5
- Jan 31, 2026
- Bulletin of mathematical biology
- Xuyuan Wang
Parameter nonidentifiability is a critical challenge in infectious disease modeling, where infinitely many parameter values produce equally good fits to observed data but lead to significantly different future predictions. Many methods have been developed to address this issue, including mathematical analysis, computational techniques, and statistical approaches. While each provides valuable insights, the integration of computationally efficient identifiability analysis with Bayesian inference for practical parameter estimation has received relatively less attention. In this paper, we incorporate a sensitivity-matrix-based identifiability analysis into a Bayesian framework to assess parameter identifiability. By examining identifiability under the prior distribution, we design Markov Chain Monte Carlo (MCMC) algorithms that integrate identifiability information to enhance the mixing and efficiency of the sampler. Posterior identifiability analysis can then be performed using the sampling results to assess the practical nonidentifiability of a model. By comparing the posterior nonidentifiability results across different models, our method enables principled model selection strategies that penalize nonidentifiable models within a rigorous Bayesian setting. Numerical studies confirm that widely used epidemic models such as SIR, SEIR, and SEIAR are often practically nonidentifiable when calibrated with limited data, underscoring the importance of model parsimony.
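The sensitivity-matrix idea can be illustrated on a deliberately nonidentifiable toy model, not one of the paper's epidemic models: two parameters that enter the observable only through their product. A (near-)zero eigenvalue of the Gram matrix SᵀS flags a direction in parameter space that the data cannot constrain.

```python
import math

def model(p, t):
    a1, a2 = p
    # a1 and a2 enter only via their product -> structurally nonidentifiable
    return a1 * a2 * math.exp(-0.5 * t)

times = [0.2 * i for i in range(25)]
p0 = (2.0, 1.5)
eps = 1e-6

# finite-difference sensitivity matrix S[i][j] = d model(t_i) / d p_j
S = []
for t in times:
    row = []
    for j in range(2):
        p_hi = list(p0); p_hi[j] += eps
        p_lo = list(p0); p_lo[j] -= eps
        row.append((model(p_hi, t) - model(p_lo, t)) / (2 * eps))
    S.append(row)

# 2x2 Gram matrix G = S^T S; its eigenvalues measure identifiability
g11 = sum(r[0] * r[0] for r in S)
g12 = sum(r[0] * r[1] for r in S)
g22 = sum(r[1] * r[1] for r in S)
tr, det = g11 + g22, g11 * g22 - g12 * g12
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2
lam_min = (tr - math.sqrt(tr * tr - 4 * det)) / 2
```

Here the two sensitivity columns are proportional, so the smallest eigenvalue is zero up to floating-point noise; a tiny ratio lam_min/lam_max is the numerical signature of a nonidentifiable parameter combination.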
- New
- Research Article
- 10.1088/1361-6382/ae3e35
- Jan 27, 2026
- Classical and Quantum Gravity
- Sanjay Ramkrishna Bhoyar + 1 more
Abstract We explore the cosmological behavior of the recently proposed f(R, Σ, T) gravity theory, which generalizes the standard f(R, T) framework by incorporating an additional Ricci-torsion scalar Σ to account for torsion-induced effects within the spacetime geometry. Employing the Efstathiou equation of state parametrization in a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) background, we derive the modified field equations and reconstruct the Hubble parameter analytically. The model parameters are constrained using the cosmic chronometer (CC) and Pantheon+SH0ES supernova datasets via a Markov Chain Monte Carlo (MCMC) approach. Our analysis reveals that the f(R, Σ, T) framework can successfully reproduce the universe’s late-time acceleration without the need for an explicit cosmological constant. The resulting effective equation of state evolves within the quintessence regime and approaches the ΛCDM limit at low redshift. Energy conditions, sound speed stability, and thermodynamic consistency are examined, confirming the physical viability of the model. These findings suggest that the inclusion of the Ricci-torsion term Σ offers a promising geometrical mechanism for cosmic acceleration and alleviates current observational tensions in Hubble parameter measurements.
- New
- Research Article
- 10.64898/2026.01.21.700905
- Jan 24, 2026
- bioRxiv
- Xiangning Xue + 6 more
Transcriptomic circadian analysis of human post-mortem brain provides a unique opportunity to characterize in vivo molecular circadian rhythms across brain regions implicated in aging and psychiatric disorders. A primary goal in such analyses is the detection of circadian biomarkers. However, this task is complicated by the frequent mismatch between a subject’s recorded circadian clock time and their true molecular circadian time — arising from observational or recording errors, as well as intrinsic biological variability. Existing methods typically address either biomarker detection or circadian time prediction in isolation. Because errors in one task can degrade performance in the other, the lack of a unified approach remains a key limitation. We propose BayCT — a Bayesian model for simultaneous circadian marker detection and molecular circadian time estimation. The model extends naturally to repeated measurements from multiple brain regions or organs. For circular data, we employ a von Mises prior distribution, with slice sampling and reversible-jump Markov chain Monte Carlo (MCMC) for Bayesian inference. Through extensive simulations and applications to transcriptomic data from three human brain regions and from 12 mouse organs, BayCT demonstrates superior performance in both biomarker detection and circadian time estimation. Furthermore, we highlight the advantages of integrating data across brain regions, achieving substantial improvements in both tasks.
- New
- Research Article
- 10.1002/mrm.70255
- Jan 23, 2026
- Magnetic resonance in medicine
- Florian Birk + 8 more
To investigate how the relaxation rates (R1, R2) and asymmetry indices (AI), derived from phase-cycled balanced steady-state free precession (pc-bSSFP) data, depend on the orientation of white matter (WM) fiber tracts at different field strengths. Phase-cycled bSSFP data acquired at 3 and 9.4T in the healthy human brain were processed using motion-insensitive rapid configuration relaxometry (MIRACLE) and a frequency response analysis to derive R1, R2, and AI values, respectively. Fractional anisotropy (FA) and fiber-to-field angle (θ) were estimated based on 3T diffusion tensor imaging. The orientation dependence of R1, R2, and AI in WM was characterized using literature model fits as well as Monte Carlo random walk simulations to explore the influence of field strength and susceptibility effects. R2 and AI exhibited a pronounced orientation dependence while the influence of anisotropy on R1 was weaker, but noticeable. The observed anisotropy increased systematically from 3 to 9.4T. Literature models assuming either a susceptibility or a generalized magic angle effect described the R2 and AI anisotropy to a high degree (R2 ≥ 0.99). The calculated partial contributions of susceptibility to R2 anisotropy increased from 24.0%-39.0% at 3T to 77.0%-87.1% at 9.4T. The Monte Carlo simulations were able to reproduce the characteristics of R2 anisotropy, but not its strength. Microstructure-driven relaxation anisotropy considerably affects pc-bSSFP relaxometry, in particular R2. The findings indicate that R2 anisotropy is driven by susceptibility at ultra-high fields whereas additional mechanisms likely contribute at lower field strengths.
- New
- Research Article
- 10.1142/s0219887826501471
- Jan 22, 2026
- International Journal of Geometric Methods in Modern Physics
- Manish Yadav + 3 more
In this study, we explore a cosmological framework based on Hoyle-Narlikar’s creation-field theory by introducing a novel form of the creation field with respect to time, [Formula: see text], within the spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime. We also investigate the behavior of dark energy and cosmic acceleration/deceleration based on our proposed model (Hoyle-Narlikar gravity model) with the new creation field. To test its observational consistency, we constrain the independent parameters [Formula: see text] and some derived parameters [Formula: see text] using Markov Chain Monte Carlo (MCMC) analysis with the emcee sampler, incorporating the Observational Hubble Data (OHD) and Pantheon Plus (PP) supernova datasets. Our analysis yields a best-fit value of the Hubble constant, [Formula: see text] at 68% C.L. from the OHD+PP data. This result agrees well with the SH0ES calibration and alleviates the [Formula: see text] Hubble tension between the Planck and SH0ES measurements. Our model further predicts a deceleration-acceleration transition at redshift [Formula: see text], with a present deceleration parameter [Formula: see text], and estimates the current age of the universe to be in the range of [Formula: see text] Gyr, consistent with observational bounds. The creation field coupling constant [Formula: see text] remains positive, ensuring physical stability. Moreover, analysis of the [Formula: see text] diagnostic reveals a distinct dynamical behavior of dark energy: the slope of [Formula: see text] transitions from negative to positive at [Formula: see text], indicating a shift from a quintessence-like phase to a phantom regime, and from positive to negative at [Formula: see text], suggesting a subsequent return from phantom to quintessence.
The Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) values of our model are slightly higher than those of the standard and dynamical dark energy models, indicating that, although mildly disfavored by these criteria, our model remains statistically consistent with the data.
- New
- Research Article
- 10.1140/epjc/s10052-025-15250-2
- Jan 21, 2026
- The European Physical Journal C
- Shubham Barua + 3 more
Abstract We study frequentist confidence intervals based on graphical profile likelihoods (Wilks’ theorem, likelihood integration), and the Feldman–Cousins (FC) prescription, a generalisation of the Neyman belt construction, in a setting with non-Gaussian Markov chain Monte Carlo (MCMC) posteriors. Our simplified setting allows us to recycle the MCMC chain as an input in all methods, including mock simulations underlying the FC approach. We find all methods agree to within 10% in the close-to-Gaussian regime, but extending methods beyond their regime of validity leads to greater discrepancies. As a key consistency check, we recover a shift in cosmological parameters between low and high redshift cosmic chronometer data with the FC method, but only when one fits all parameters back to the mocks. We observe that fixing parameters, a common approach in the literature, risks underestimating confidence intervals.
- New
- Research Article
- 10.1093/gji/ggag022
- Jan 20, 2026
- Geophysical Journal International
- H Ghadjari + 4 more
Summary Electrical resistivity tomography (ERT) is used to infer the subsurface resistivity structure. ERT requires solving a nonlinear inverse problem that is often approximated as linear to reduce computational time. However, the approximation requires assumptions that cause limitations for the data analysis. Most of the computational time is due to the forward problem that requires solving the Poisson equation. Recently, similar forward problems have been shown to be replaceable with a surrogate model of lower computational cost. We present a geoelectric surrogate based on Fourier Neural Operators (FNO) and demonstrate a successful application in nonlinear inversion. The standard deviation of the surrogate prediction error for unseen samples is <5%. Furthermore, the surrogate reduces computational time by over three orders of magnitude per realization, enabling ERT for previously intractable settings. We apply the surrogate in Markov chain Monte Carlo (MCMC) inversion of simulated data. The results resolve sharp resistivity changes with plausible uncertainties.
- Research Article
- 10.1111/2041-210x.70240
- Jan 19, 2026
- Methods in Ecology and Evolution
- Shu Xie + 2 more
Abstract State‐dependent speciation and extinction (SSE) models are a popular framework for quantifying whether species traits have an impact on evolutionary rates and how this shapes the variation in species richness among clades in a phylogeny. However, SSE models are becoming increasingly complex, limiting the application of likelihood‐based inference methods. Approximate Bayesian computation (ABC), a likelihood‐free approach, is a potentially powerful alternative for estimating parameters. Here, we develop an ABC framework to estimate state‐dependent speciation, extinction and transition rates from phylogenetic trees in BiSSE (binary state dependent speciation and extinction), GeoSSE (geographic state dependent speciation and extinction) and MuSSE (multiple state‐dependent speciation and extinction) models. Using different sets of candidate summary statistics, we then compare the inference ability of ABC with that of using likelihood‐based maximum likelihood (ML) and Markov chain Monte Carlo (MCMC) methods to identify the combinations that best capture the complex relationships between rates of diversification and species traits. Our results show the ABC algorithm can accurately estimate state‐dependent diversification rates for most of the model parameter sets we explored. The inference error of the parameters associated with the species‐poor state is larger with ABC than in the likelihood estimations only when the speciation rate (λ) is highly asymmetric between states in all three models. We suggest that the combination of normalized lineage‐through‐time (nLTT) statistics and phylogenetic signal constitutes efficient summary statistics for the ABC method. By providing an efficient algorithm and a set of suitable summary statistics, our work aims to contribute to the use of the ABC approach in the development of complex SSE models, for which a likelihood is not available.
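The rejection-ABC scheme underlying such likelihood-free inference can be sketched in a few lines: estimating a normal mean with the sample mean as the summary statistic. The prior, tolerance, and sample sizes here are illustrative assumptions, not the SSE setting of the paper.

```python
import random

random.seed(1)

# observed data from N(mu = 3, 1); goal: ABC posterior for mu
obs = [random.gauss(3.0, 1.0) for _ in range(50)]
s_obs = sum(obs) / len(obs)        # summary statistic: sample mean

accepted = []
while len(accepted) < 500:
    mu = random.uniform(0.0, 6.0)  # draw a candidate from the prior
    sim = [random.gauss(mu, 1.0) for _ in range(50)]
    s_sim = sum(sim) / len(sim)
    # keep mu only if the simulated summary is close to the observed one
    if abs(s_sim - s_obs) < 0.1:
        accepted.append(mu)

mu_hat = sum(accepted) / len(accepted)   # ABC posterior mean
```

The accepted draws approximate the posterior without ever evaluating a likelihood; the quality of the approximation hinges on how informative the summary statistic is and how tight the tolerance can be made, which is exactly the trade-off the paper's summary-statistic comparison addresses.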
- Research Article
- 10.3847/1538-4365/ae2259
- Jan 15, 2026
- The Astrophysical Journal Supplement Series
- Zhijun Tu + 4 more
Abstract In order to test the robustness and reliability of the new generation spectral-line identifier PyEMILI, as initially introduced in Paper I, in line identification and establish a reference/benchmark dataset for future spectroscopic studies, we run the code on the line lists of a selected sample of emission-line nebulae, including planetary nebulae (PNe), H ii regions, and Herbig–Haro objects with deep high-dispersion spectroscopic observations published over the past two decades. The automated line identifications by PyEMILI demonstrate significant improvements in both completeness and accuracy compared to the previous manual identifications in the literature. Since our last report of PyEMILI, the atomic transition database used by the code has been further expanded by crossmatching the Kurucz line lists. Moreover, to aid the PyEMILI identification of numerous faint optical recombination lines (ORLs) of C ii, N ii, O ii, and Ne ii, we compiled a new dataset of effective recombination coefficients for these nebular lines, and created a new subroutine in the code to generate theoretical spectra of heavy-element ORLs for various electron temperatures and densities; these theoretical spectra can be used to fit the observed recombination spectrum of a PN to obtain the electron temperature, density, and ionic abundances using the Markov Chain Monte Carlo (MCMC) method. We present MCMC-derived parameters for a sample of PNe. This work establishes PyEMILI as a robust and versatile tool for both line identification and plasma diagnostics in deep spectroscopy of gaseous nebulae.
- Research Article
- 10.48204/j.tecno.v28n1.a8961
- Jan 13, 2026
- Tecnociencia
- Franklin Simón Vásquez Guardia
The transit method, with over 4000 confirmed exoplanets to date, remains one of the most prolific and robust techniques in the field. This study focuses on the WASP-10 system, a K5-type star located 92 parsecs from Earth, which hosts the confirmed exoplanet WASP-10b. We present a detailed analysis of data from the TESS and TASTE surveys to characterize the planetary parameters of WASP-10b and compare them with values reported in the literature. Although both surveys employ the transit method, their observational strategies and data reduction pipelines differ substantially. Consequently, the datasets were processed independently and subsequently analyzed using a Markov Chain Monte Carlo (MCMC) framework to determine the orbital and physical parameters that best reproduce the observed light curves.
- Research Article
- 10.1080/10618600.2025.2612246
- Jan 7, 2026
- Journal of Computational and Graphical Statistics
- Geonhee Han + 1 more
Importance sampling (IS) is commonly used for cross validation (CV) in Bayesian models, because it only involves reweighting existing posterior draws without needing to re-estimate the model by re-running Markov chain Monte Carlo (MCMC). For hierarchical models, standard IS can be unreliable; the out-of-sample generalization hypothesis may involve structured case-deletion schemes which significantly alter the posterior geometry. This can force costly MCMC re-runs and make CV impractical. As a principled alternative, we tailor adaptive sequential Monte Carlo to sample along a path of posteriors that leads to the case-deleted posterior. The sampler is designed to support various hypotheses by accommodating diverse CV designs, and to streamline the workflow by automating path construction and systematically minimizing MCMC intervention. We demonstrate its utility with three types of predictive model assessment: longitudinal leave-group-out CV, group K-fold CV, and sequential one-step-ahead validation. Supplementary materials are available online.
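The importance-sampling identity behind IS-based LOO-CV (reweighting posterior draws by 1/p(y_i | θ) to approximate the case-deleted predictive) can be checked on a conjugate toy model where the exact leave-one-out predictive is available in closed form. The normal model, effectively flat prior, and direct posterior sampling are simplifying assumptions; the paper's point is precisely the harder, hierarchical case where such shortcuts fail.

```python
import math
import random

random.seed(7)

def norm_logpdf(x, m, s):
    return -0.5 * ((x - m) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))

# data y_j ~ N(mu, 1); with a flat prior, mu | y ~ N(ybar, 1/n)
n = 40
y = [random.gauss(1.0, 1.0) for _ in range(n)]
ybar = sum(y) / n
draws = [random.gauss(ybar, 1 / math.sqrt(n)) for _ in range(20000)]

# IS-LOO for observation i: p(y_i | y_-i) ~= S / sum_s 1/p(y_i | mu_s)
i = 0
inv = [math.exp(-norm_logpdf(y[i], mu, 1.0)) for mu in draws]
loo_is = math.log(len(inv) / sum(inv))

# exact leave-one-out predictive (flat prior): N(ybar_-i, 1 + 1/(n-1))
ybar_mi = (sum(y) - y[i]) / (n - 1)
loo_exact = norm_logpdf(y[i], ybar_mi, math.sqrt(1 + 1 / (n - 1)))
```

In this well-behaved setting the reweighted estimate matches the closed-form leave-one-out log predictive density closely; when case deletion reshapes the posterior geometry, as in the hierarchical schemes the paper targets, the weights degenerate and the sequential Monte Carlo path construction becomes the principled fallback.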