Articles published on Stochastic approximation
3497 Search results
- Research Article
- 10.1002/psp4.70186
- Jan 1, 2026
- CPT: pharmacometrics & systems pharmacology
- Donato Teutonico + 3 more
Physiologically Based Pharmacokinetic (PBPK) modeling is a powerful tool in drug development that integrates drug-specific information with physiological parameters to predict drug concentrations. However, parameter estimation in PBPK models presents significant challenges due to the large number of parameters involved and limited observed data. This tutorial introduces a novel approach coupling whole-body PBPK (WB-PBPK) models with population estimation methods (popWB-PBPK) to leverage individual data and estimate inter-individual variability on physiologically relevant parameters. The framework employs an optimized Stochastic Approximation Expectation-Maximization (SAEM) algorithm, reducing the estimation runtime through an adaptive parameter grid optimization and linear interpolation techniques. Using theophylline as a case study, we illustrate how this approach can accurately estimate drug-specific parameters (CYP1A2 clearance and lipophilicity) while incorporating covariate effects (smoking status). The optimized algorithm significantly reduces computational time compared to the standard SAEM algorithm. Our implementation in the saemixPBPK R package provides an accessible framework for parameter estimation in PBPK models, enabling more robust predictions of pharmacokinetic behavior by leveraging individual data. This approach represents an important advancement in mechanistic modeling, allowing simultaneous estimation of population parameters, variability, and uncertainty while maintaining the physiological relevance of PBPK models.
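As a concrete illustration of the stochastic approximation step that gives SAEM its name, here is a minimal Python sketch for a toy random-intercept model y_ij = mu + b_i + eps_ij with known residual variance. The model, gain schedule, and variable names are illustrative assumptions, not the saemixPBPK implementation.

```python
import numpy as np

# Minimal SAEM sketch for the toy model y_ij = mu + b_i + eps_ij with
# b_i ~ N(0, omega2) and known residual variance sigma2.  The model and
# names are illustrative assumptions, not the saemixPBPK algorithm.
rng = np.random.default_rng(0)
n_subj, n_obs, sigma2 = 40, 5, 0.25
b_true = rng.normal(0.0, 0.7, n_subj)
y = 2.0 + b_true[:, None] + rng.normal(0.0, np.sqrt(sigma2), (n_subj, n_obs))

mu, omega2 = 0.0, 1.0        # initial population parameters
s1, s2 = 0.0, 0.0            # smoothed sufficient statistics
for k in range(1, 501):
    # S-step: draw b_i | y_i (conjugate Gaussian in this toy model)
    prec = n_obs / sigma2 + 1.0 / omega2
    mean = (y - mu).sum(axis=1) / sigma2 / prec
    b = rng.normal(mean, np.sqrt(1.0 / prec))
    # SA-step: constant gain during a short exploratory phase, then decreasing
    gamma = 1.0 if k <= 100 else 1.0 / (k - 100)
    s1 += gamma * (b.sum() - s1)
    s2 += gamma * ((b ** 2) .sum() - s2)
    # M-step: complete-data MLE from the smoothed statistics
    mu = y.mean() - s1 / n_subj
    omega2 = max(s2 / n_subj, 1e-8)

print(f"estimated mu={mu:.3f}, omega={np.sqrt(omega2):.3f}")  # truth: 2.0, 0.7
```

The decreasing gain after the exploratory phase is what distinguishes SAEM from a plain Monte Carlo EM: the sufficient statistics are averaged across iterations rather than re-estimated from scratch.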
- Research Article
- 10.1007/s10957-025-02892-1
- Dec 17, 2025
- Journal of Optimization Theory and Applications
- Arsen Hnatiuk + 4 more
Stochastic Dynamical Low-Rank Approximation in the Context of Machine Learning
- Research Article
- 10.3390/precisoncol1010002
- Dec 10, 2025
- Precision Oncology
- Marie Fusella Giuntini + 3 more
Background/Objectives: Radioactive iodine (RAI) therapy is widely used to treat metastatic differentiated thyroid cancer. To investigate physiological determinants of treatment response, a mechanistic model was developed, formulated as a system of coupled ordinary differential equations. Methods: The model captures the interactions between tumor burden, thyroglobulin (Tg) production and clearance, and radioactive iodine activity within a pharmacokinetic–pharmacodynamic framework. Model parameters were estimated using the Markov chain Monte Carlo Stochastic Approximation Expectation–Maximization (MCMC-SAEM) algorithm, based on clinical data from a cohort of 50 patients. Results: Tumor radiosensitivity (ρ) and initial tumor burden (N0) consistently emerged as the most influential factors in both responder and non-responder groups classified by disease doubling time under RAI (Td). A reduced model using only these two parameters preserved the principal response patterns of the full model. Other parameters influenced transient dynamics but had limited effect on overall Tg variance. Conclusions: These results support the use of a reduced calibration approach focused on ρ, N0, and the effective doubling time Td. The findings establish a theoretical foundation for developing tractable dynamic surrogates that reproduce the main treatment kinetics and support model-based clinical decision-making in RAI therapy.
- Research Article
- 10.1016/j.spa.2025.104759
- Dec 1, 2025
- Stochastic Processes and their Applications
- Vivek S Borkar
Stochastic approximation with two time scales: The general case
- Research Article
- 10.1098/rsos.251918
- Dec 1, 2025
- Royal Society Open Science
- Ajay Jasra + 2 more
In this article, we consider likelihood-based estimation of static parameters for a class of partially observed McKean–Vlasov (MV) diffusion processes with discrete-time observations over a fixed time interval. In particular, using the framework of (Awadelkarim, Jasra, Ruzayqat 2024 SIAM J. Control Optim. 62, 2664–2694 (doi:10.1137/23M160298X)) we develop a new randomized multilevel Monte Carlo method for estimating the parameters, based upon Markovian stochastic approximation (MSA) methodology. New Markov chain Monte Carlo (MCMC) algorithms for the partially observed MV model are introduced, facilitating the application of this framework. We prove, under assumptions, that our estimator is biased, but that the bias is small and controllable. Our approach is implemented on several examples.
- Research Article
- 10.1115/1.4070252
- Nov 27, 2025
- Journal of Energy Resources Technology, Part B: Subsurface Energy and Carbon Capture
- Xiaoguang Wang + 7 more
Layered water injection continues to serve as a critical management technique in optimizing oilfield development, particularly in heterogeneous, multilayered reservoirs with high water content. Traditional methodologies, which often rely on computationally intensive geological modeling, face limitations in efficiently addressing the dynamic challenges of injection allocation. This study introduces an innovative data-driven strategy that leverages existing reservoir geological data and historical production records to estimate vertical and horizontal water injection allocations with enhanced precision. By circumventing the need for complex geological models, the proposed approach significantly reduces computational demands while refining injection protocols through robust analysis of historical performance metrics. Key advancements include the development of a systematic framework for calculating reservoir- and well-specific water injection ratios, coupled with an improved simultaneous perturbation stochastic approximation (SPSA) algorithm to optimize injection-recovery dynamics. Empirical validation in the Xinjiang L reservoir demonstrated notable improvements: over a two-year implementation period in representative well groups, cumulative oil production increased by 5.98%, while cumulative water injection and well-zone water content decreased by 3.74% and 3.58%, respectively, compared to conventional practices. These results underscore the method's efficacy in enhancing injection efficiency and reservoir management, offering a scalable solution for heterogeneous multilayer systems. The study contributes to petroleum engineering by presenting a pragmatic, data-centric alternative to traditional modeling, with direct implications for reducing operational costs and extending reservoir lifecycles. This approach not only advances academic discourse on injection optimization but also provides field practitioners with a deployable strategy for sustainable resource extraction.
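For reference, the core of a basic SPSA iteration is compact: two noisy function evaluations per step yield a full gradient estimate regardless of dimension. The Python sketch below minimizes a noisy quadratic stand-in for an injection-recovery objective; the objective, gains, and dimension are illustrative assumptions, not the improved variant developed in the article.

```python
import numpy as np

# Minimal SPSA sketch (Spall's two-measurement gradient approximation).
# The quadratic objective and gain sequences are illustrative assumptions.
rng = np.random.default_rng(1)

def loss(theta):
    # stand-in for a noisy simulator of injection-recovery performance
    return np.sum((theta - 3.0) ** 2) + rng.normal(0.0, 0.1)

theta = np.zeros(4)
for k in range(1, 2001):
    a_k = 0.1 / k ** 0.602          # step-size gain (standard SPSA exponents)
    c_k = 0.1 / k ** 0.101          # perturbation size
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher directions
    g_hat = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * g_hat            # stochastic gradient-like update

print(theta)                        # should approach [3, 3, 3, 3]
```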
- Research Article
- 10.1063/5.0299580
- Nov 17, 2025
- The Journal of chemical physics
- Christian Sommerfeld + 1 more
The early stages of aggregation of amyloidogenic proteins, such as amyloid-β (Aβ), are of great interest due to the possible pathogenic nature of small oligomeric aggregates. To shed light on the thermodynamics of this aggregation process, we perform a comparative study of the dimerization of Aβ(1-40) and Aβ(1-42) using an intermediate resolution protein model (PRIME20) and a flat-histogram Monte Carlo technique, stochastic approximation Monte Carlo (SAMC). We show that aggregation drives secondary structure formation in both variants of Aβ. The dimers show a prevalence of β-sheet formation near the N-terminus of the chains and the beginning of β-sheet formation in the center of the chains, where the cross-beta structure will form for the mature amyloid fibril. Aβ(1-42) exhibits a stronger contribution of intermolecular hydrogen bonding compared to Aβ(1-40). It also shows a better defined intermolecular hydrogen-bonding pattern and less structural polymorphism of the dimer. Both findings constitute a molecular picture for the observed phenomenology of faster aggregation and growth of Aβ(1-42) amyloid fibrils compared to the Aβ(1-40) ones.
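The flat-histogram idea behind SAMC can be demonstrated on a toy landscape. The Python sketch below flattens the energy histogram of a 1-D double well by adapting bin log-weights with a decreasing gain; the energy function, binning, and gain schedule are illustrative assumptions, far removed from the PRIME20 protein setup.

```python
import numpy as np

# Minimal stochastic approximation Monte Carlo (SAMC) sketch on a 1-D
# double-well energy U(x); bins, gains, and proposal scale are illustrative.
rng = np.random.default_rng(2)
U = lambda x: (x ** 2 - 1.0) ** 2 / 0.05         # energy landscape
edges = np.linspace(0.0, 25.0, 26)               # energy bins
log_w = np.zeros(len(edges) - 1)                 # adaptive log-weights theta_i
pi = np.full(len(log_w), 1.0 / len(log_w))       # desired uniform bin visits

x = 0.0
j = np.clip(np.searchsorted(edges, U(x)) - 1, 0, len(log_w) - 1)
for k in range(1, 50_001):
    x_new = x + rng.normal(0.0, 0.3)
    j_new = np.clip(np.searchsorted(edges, U(x_new)) - 1, 0, len(log_w) - 1)
    # Metropolis rule for the reweighted target exp(-U(x)) / w_{bin(x)}
    if np.log(rng.random()) < (log_w[j] - log_w[j_new]) - (U(x_new) - U(x)):
        x, j = x_new, j_new
    gamma = 50.0 / max(50, k)                    # decreasing SA gain
    log_w += gamma * ((np.arange(len(log_w)) == j) - pi)  # flatten histogram

# exp(log_w) now approximates the relative density of states per energy bin
```

The weights penalize frequently visited energy bins, which is what lets the chain escape the deep wells that trap ordinary Metropolis sampling.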
- Research Article
- 10.1080/01621459.2025.2587922
- Nov 16, 2025
- Journal of the American Statistical Association
- Seunghyun Lee + 1 more
In the era of generative AI, deep generative models (DGMs) with latent representations have gained tremendous popularity. Despite their impressive empirical performance, the statistical properties of these models remain underexplored. DGMs are often overparametrized, non-identifiable, and uninterpretable black boxes, raising serious concerns when deploying them in high-stakes applications. Motivated by this, we propose interpretable deep generative models for rich data types with discrete latent layers, called Deep Discrete Encoders (DDEs). A DDE is a directed graphical model with multiple binary latent layers. Theoretically, we propose transparent identifiability conditions for DDEs, which imply progressively smaller sizes of the latent layers as they go deeper. Identifiability ensures consistent parameter estimation and inspires an interpretable design of the deep architecture. Computationally, we propose a scalable estimation pipeline of a layerwise nonlinear spectral initialization followed by a penalized stochastic approximation EM algorithm. This procedure can efficiently estimate models with exponentially many latent components. Extensive simulation studies for high-dimensional data and deep architectures validate our theoretical results and demonstrate the excellent performance of our algorithms. We apply DDEs to three diverse real datasets with different data types to perform hierarchical topic modeling, image representation learning, and response time modeling in educational testing.
- Research Article
- 10.1038/s41598-025-21709-9
- Nov 10, 2025
- Scientific Reports
- Elizaveta Tarasova + 3 more
We present a decentralized two-layer architecture for dynamic task assignment in multi-agent systems, designed to operate under partial observability, noisy feedback, and limited communication. The system consists of adaptive controllers that predict task parameters via recursive regression with forgetting and selectively broadcast tasks to a small subset of agents based on relevance and availability. To ensure consistency of task models across the network, we introduce a distributed optimization procedure that combines Simultaneous Perturbation Stochastic Approximation (SPSA) with consensus-based synchronization. The proposed approach enables scalable, online task allocation without centralized coordination. As a representative application, we evaluate the system on simulated workloads involving prompt-based tasks assigned to a diverse set of large language models (LLMs), demonstrating its robustness across varying noise levels, task dynamics, and input arrival patterns.
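A minimal sketch of the SPSA-plus-consensus pattern follows, assuming a fixed four-agent ring with a doubly stochastic mixing matrix and simple quadratic local losses; it stands in for, but is not, the article's task-model synchronization procedure.

```python
import numpy as np

# Sketch of SPSA combined with consensus averaging over a small agent
# network.  Topology, losses, and gains are illustrative assumptions.
rng = np.random.default_rng(3)
n_agents, dim = 4, 3
targets = rng.normal(2.0, 0.5, (n_agents, dim))   # each agent's local optimum

def local_loss(i, theta):
    return np.sum((theta - targets[i]) ** 2) + rng.normal(0.0, 0.05)

# Doubly stochastic mixing matrix for a ring of 4 agents
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
theta = np.zeros((n_agents, dim))
for k in range(1, 3001):
    a_k, c_k = 0.05 / k ** 0.602, 0.1 / k ** 0.101
    for i in range(n_agents):                      # local SPSA step per agent
        delta = rng.choice([-1.0, 1.0], dim)
        diff = local_loss(i, theta[i] + c_k * delta) - local_loss(i, theta[i] - c_k * delta)
        theta[i] -= a_k * diff / (2 * c_k * delta)
    theta = W @ theta                              # consensus synchronization

print(theta.round(2))   # rows agree and approach the mean of the local optima
```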
- Research Article
- 10.1016/j.sysconle.2025.106250
- Nov 1, 2025
- Systems & Control Letters
- Vivek S Borkar
Stochastic approximation in non-Markovian environments
- Research Article
- 10.5687/sss.2025.21
- Oct 28, 2025
- Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications
- Qiming Tan + 2 more
A Stopping Rule for Linear Stochastic Approximation with Martingale Difference Noise
- Research Article
- 10.1080/00949655.2025.2575115
- Oct 28, 2025
- Journal of Statistical Computation and Simulation
- Sooyoung Cheon
Exact inference in conditional logistic regression is often attempted with Markov chain Monte Carlo (MCMC), but MCMC performs poorly in small or sparse samples and near separation because valid datasets are rare, causing poor mixing and unreliable results. We propose SIS-CLR (stochastic approximation Monte Carlo importance sampling for conditional logistic regression), which integrates the stochastic approximation Monte Carlo framework with adaptive importance sampling. SIS-CLR draws from an enlarged reference set that also includes auxiliary datasets violating the sufficient-statistic constraints for nuisance parameters. This design improves the mixing efficiency of the Markov chain, accelerates convergence, and ensures a desired proportion of valid samples. The method remains reliable in difficult inferential settings, such as near-boundary data or perfect separation, where likelihood-based or asymptotic approaches often fail. Simulations and real-data analyses show that SIS-CLR yields more accurate and stable p-value estimates than existing methods while substantially reducing computation. Together, these results position SIS-CLR as a practical and theoretically grounded tool for exact conditional inference in challenging logistic regression problems.
- Research Article
- 10.54117/ijps.v2i2.12
- Oct 21, 2025
- IPS Journal of Physical Sciences
- Ekemini U George + 3 more
This study extends the idea of common solutions to problems in classical functional analysis to accommodate randomness in the system, as most real-life problems are of this nature. A common solution to random split feasibility and random variational inequality problems, called the random split variational inequality problem, is sought through fixed point theory, using a nonexpansive operator. A random version of the two-step Wang algorithm is used to obtain a unique solution to the problem, and strong convergence to this unique solution is proven. The result is applied to an optimal tax policy problem and is seen to be adequate in solving it, yielding tax rates of 14.79% and 13.91% for the two categories of businesses. This result extends and unifies some established results in the literature on deterministic functional analysis.
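To give a flavor of stochastic fixed-point iterations of this kind, the sketch below runs a noisy Krasnoselskii-Mann-type averaging toward the unique fixed point of a contractive operator; the operator, noise model, and averaging weights are illustrative assumptions, not the two-step Wang-type algorithm analyzed in the article.

```python
import numpy as np

# Stochastic fixed-point iteration: x_{k+1} = (1-a_k) x_k + a_k (T(x_k) + noise).
# T, the noise, and the weights a_k are illustrative assumptions.
rng = np.random.default_rng(7)
T = lambda x: 0.5 * x + 1.0          # contractive map, unique fixed point x* = 2

x = 0.0
for k in range(1, 50_001):
    a_k = 1.0 / k                     # diminishing averaging weight
    x = (1 - a_k) * x + a_k * (T(x) + rng.normal(0.0, 0.5))
print(f"x ~ {x:.3f}")                 # converges to 2.0 despite the noise
```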
- Research Article
- 10.1007/s10957-025-02858-3
- Oct 17, 2025
- Journal of Optimization Theory and Applications
- Xiang Cheng + 3 more
This paper develops a hybrid deep reinforcement learning approach to manage an insurance portfolio under diffusion models. To address model uncertainty, we adopt the recently developed modelling of exploration and exploitation strategies in a continuous-time decision-making process with reinforcement learning. We consider an insurance portfolio management problem in which an entropy-regularized reward function and corresponding relaxed stochastic controls are formulated. To obtain the optimal relaxed stochastic controls, we develop an iterative deep reinforcement learning algorithm based on Markov chain approximation and stochastic approximation, in which the probability distribution of the optimal stochastic controls is approximated by neural networks. In our hybrid algorithm, both techniques are adopted in the learning process: the Markov chain approximation method is used to find initial guesses, while stochastic approximation is used to estimate the parameters of the neural networks. Convergence analysis of the algorithm is presented, and numerical examples illustrate its performance.
- Research Article
- 10.3390/math13203281
- Oct 14, 2025
- Mathematics
- Shenggang Zhang + 2 more
This paper analyzes the exponential convergence properties of Symmetric Stochastic Bernstein Polynomials (SSBPs), a novel approximation framework that combines the deterministic precision of classical Bernstein polynomials (BPs) with the adaptive node flexibility of Stochastic Bernstein Polynomials (SBPs). Through innovative applications of order statistics concentration inequalities and modulus of smoothness analysis, we derive the first probabilistic convergence rates for SSBPs across all Lp (1 ≤ p ≤ ∞) norms and in pointwise approximation. Numerical experiments demonstrate dual advantages: (1) SSBPs achieve comparable L∞ errors to BPs in approximating fundamental stochastic functions (uniform distribution and normal density), while significantly outperforming SBPs; (2) empirical convergence curves validate exponential decay of approximation errors. These results position SSBPs as a principled solution for stochastic approximation problems requiring both mathematical rigor and computational adaptability.
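A simple way to see the construction is to compare classical Bernstein nodes k/n with random nodes given by uniform order statistics. The Python sketch below does exactly that for one common (non-symmetric) SBP form, assumed here for illustration; the symmetric node scheme analyzed in the article differs.

```python
import numpy as np
from math import comb

# Classical Bernstein approximation vs. a stochastic variant that replaces
# the equispaced nodes k/n by uniform order statistics.  The node scheme is
# an illustrative assumption, not the symmetric (SSBP) construction.
rng = np.random.default_rng(4)
f = lambda t: np.exp(-(t - 0.5) ** 2 / 0.02)      # target function on [0, 1]
n = 40
x = np.linspace(0.0, 1.0, 201)
basis = np.array([comb(n, k) * x ** k * (1 - x) ** (n - k) for k in range(n + 1)])

nodes_det = np.arange(n + 1) / n                  # classical Bernstein nodes
nodes_rnd = np.sort(rng.uniform(0.0, 1.0, n + 1)) # random order-statistic nodes
nodes_rnd[0], nodes_rnd[-1] = 0.0, 1.0            # pin the endpoints

bp = f(nodes_det) @ basis
sbp = f(nodes_rnd) @ basis
print(f"max |f - BP|  = {np.abs(f(x) - bp).max():.4f}")
print(f"max |f - SBP| = {np.abs(f(x) - sbp).max():.4f}")
```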
- Research Article
- 10.1039/d5cp02537k
- Oct 8, 2025
- Physical chemistry chemical physics : PCCP
- Prutthipong Tsuppayakorn-Aek + 3 more
Exploring emergent phases in monolayer alloy superconductors represents a forefront endeavor in contemporary quantum materials research. Following the successful exploration of AlB2 in a superconducting state, we provide a significant reference for examining superconductivity in Si-substituted AlB2 using first-principles predictions. This noteworthy outcome highlights that Al0.75Si0.25B2 is one of the energetically stable configurations within the Al1-xSixB2 system that exhibits the superconducting state. However, the anharmonic effects on this phase significantly impact its phonon spectra, potentially influencing dynamical stability. In specific cases, the application of the stochastic self-consistent harmonic approximation enables us to capture how thermally induced lattice vibrations impact the equilibrium structure of the material. It is observed that the inclusion of anharmonic corrections brings the predicted superconducting characteristics into closer agreement with those derived from the harmonic model, thereby resolving the issue of imaginary frequencies. As a result, we demonstrate that the Allen-Dynes modified McMillan scheme predicts a critical temperature (Tc) of approximately 15 K. This can be enhanced to 41 K through the utilization of the anisotropic Migdal-Eliashberg theory. Our findings reveal that the role of anharmonicity (arising from minor corrections in the acoustic regime contributed by the high atomic mass) in Al0.75Si0.25B2 theoretically leads to superconductivity, with Tc being consistent with values predicted within the harmonic approximation.
- Research Article
- 10.1021/acs.jpclett.5c02084
- Oct 2, 2025
- The journal of physical chemistry letters
- Shouhang Li + 3 more
Thermal conductivity typically decreases with increasing temperature along the three principal crystalline directions, primarily due to enhanced phonon anharmonicity. In this work, we conducted a comprehensive first-principles investigation of thermal transport in crystalline polyethylene by solving the Wigner transport equation, assisted with the stochastic self-consistent harmonic approximation. It is found that the thermal conductivity of crystalline polyethylene decreases along the chain direction, but increases nearly linearly in the out-of-chain directions. This anomalous contrasting behavior stems from the dominance of particle-like transport along the chain and wave-like transport in the out-of-chain directions. The strong anharmonicity facilitates phonon tunneling between high- and low-frequency modes in the out-of-plane directions. Therefore, further enhancement of thermal conductivity in those directions could benefit from increased anharmonicity and the introduction of additional disorder. These findings provide fundamental insights into the thermal transport mechanisms of anisotropic crystalline polymers, offering valuable guidance for rationally engineering their thermal properties.
- Research Article
- 10.33889/ijmems.2025.10.5.072
- Oct 1, 2025
- International Journal of Mathematical, Engineering and Management Sciences
- Nguyen Dang Diem + 2 more
This study proposes a stochastic finite element method (SFEM) for analyzing the static response of beams with material properties modeled as three-dimensional spatial random fields. The method employs weighted integration to discretize spatial variations in Young’s modulus and utilizes a perturbation approach for efficient statistical response computation. Validation is performed using Monte Carlo simulations (MCs) with the spectral representation method to establish a benchmark dataset, showing strong agreement between the two methods, particularly for large correlation distances. The results demonstrate that spatial variability in Young’s modulus significantly affects beam displacement. Shorter correlation lengths reduce displacement variability, while longer correlation lengths lead to greater deflection dispersion. Additionally, an enhancement in the standard deviation of Young's elastic modulus correlates with a higher coefficient of variation (COV) of displacement, confirming structural sensitivity to material randomness. The COV of displacement shows a nearly proportional relationship to the COV of Young’s modulus, which provides key insights into the predictability of stochastic structural behavior. While SFEM is computationally more efficient than MCs, its first-order perturbation formulation limits accuracy in highly nonlinear cases. Future work should explore higher-order stochastic approximations, non-Gaussian random fields, and nonlinear extensions. These findings contribute to advancing stochastic structural analysis by extending SFEM to 3D random fields, providing a foundation for uncertainty quantification in engineering design and highlighting the importance of spatially varying material properties.
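The near-proportional COV relationship reported above can be checked on a scalar example. The sketch below compares a first-order perturbation estimate with Monte Carlo for the cantilever tip deflection delta = P L^3 / (3 E I), with a 1-D lognormal Young's modulus assumed in place of the paper's 3-D random fields.

```python
import numpy as np

# First-order perturbation vs. Monte Carlo for cantilever tip deflection
# delta = P L^3 / (3 E I).  The 1-D lognormal model of E is an illustrative
# stand-in for the paper's 3-D random-field SFEM.
rng = np.random.default_rng(5)
P, L, I = 1_000.0, 2.0, 8.0e-6                 # load [N], length [m], inertia [m^4]
E_mean, cov_E = 210e9, 0.10                    # Young's modulus and its COV

delta = lambda E: P * L ** 3 / (3.0 * E * I)
# First-order perturbation: d(delta)/dE = -delta/E, so COV_delta ~ COV_E
mean_pert, cov_pert = delta(E_mean), cov_E

# Monte Carlo benchmark with a lognormal modulus of matching mean and COV
sigma_ln = np.sqrt(np.log(1.0 + cov_E ** 2))
E_samples = E_mean * np.exp(rng.normal(0.0, sigma_ln, 100_000) - 0.5 * sigma_ln ** 2)
d = delta(E_samples)
print(f"perturbation: mean={mean_pert*1e3:.3f} mm, COV={cov_pert:.3f}")
print(f"Monte Carlo : mean={d.mean()*1e3:.3f} mm, COV={d.std()/d.mean():.3f}")
```

The small gap between the two estimates grows with the input COV, mirroring the paper's observation that first-order perturbation loses accuracy in strongly nonlinear regimes.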
- Research Article
- 10.3390/a18100622
- Oct 1, 2025
- Algorithms
- Satya Dev Pasupuleti + 1 more
Cancer classification using high-dimensional genomic data presents significant challenges in feature selection, particularly when dealing with datasets containing tens of thousands of features. This study presents a new application of the Simultaneous Perturbation Stochastic Approximation (SPSA) method for feature selection on large-scale cancer datasets, representing the first investigation of the SPSA-based feature selection technique applied to cancer datasets of this magnitude. Our research extends beyond traditional SPSA applications, which have historically been limited to smaller datasets, by evaluating its effectiveness on datasets containing 35,924 to 44,894 features. Building upon established feature-ranking methodologies, we introduce a comprehensive evaluation framework that examines the impact of varying proportions of top-ranked features (5%, 10%, and 15%) on classification performance. This systematic approach enables the identification of optimal feature subsets most relevant to cancer detection across different selection thresholds. The key contributions of this work include the following: (1) the first application of SPSA-based feature selection to large-scale cancer datasets exceeding 35,000 features, (2) an evaluation methodology examining multiple feature proportion thresholds to optimize classification performance, (3) comprehensive experimental validation through comparison with ten state-of-the-art feature selection and classification methods, and (4) statistical significance testing to quantify the improvements achieved by the SPSA approach over benchmark methods. Our experimental evaluation demonstrates the effectiveness of the feature selection and ranking-based SPSA method in handling high-dimensional cancer data, providing insights into optimal feature selection strategies for genomic classification tasks.
- Research Article
- 10.1080/02331888.2025.2562301
- Sep 30, 2025
- Statistics
- Valentin Konakov + 2 more
The Robbins-Monro algorithm is a recursive, simulation-based stochastic procedure to approximate the zeros of a function that can be written as an expectation. It is known that under some technical assumptions, Gaussian limit theorems approximate the stochastic performance of the algorithm. Here, we are interested in strong approximations for Robbins-Monro procedures. The main tools for obtaining them are local limit theorems, that is, results on the convergence of the density of the algorithm. The analysis relies on a version of parametrix techniques for Markov chains converging to diffusions. The main difficulty that arises here is the fact that the drift is unbounded.
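For context, the Robbins-Monro recursion itself is a few lines. The sketch below approximates the root of h(theta) = E[H(theta, X)] from noisy evaluations only, with an illustrative linear h and the classical 1/k gain.

```python
import numpy as np

# Minimal Robbins-Monro sketch: find the root of h(theta) = E[H(theta, X)]
# from noisy evaluations.  h and the gain sequence are illustrative.
rng = np.random.default_rng(6)

def H(theta):
    # noisy observation of h(theta) = theta - 1.5  (root at theta* = 1.5)
    return (theta - 1.5) + rng.normal(0.0, 1.0)

theta = 0.0
for k in range(1, 100_001):
    theta -= H(theta) / k          # gain a_k = 1/k satisfies the RM conditions
print(f"theta ~ {theta:.3f}")      # converges almost surely to 1.5
```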