Articles published on Approximation Approach
- Research Article
- 10.1080/00207543.2025.2584726
- Nov 7, 2025
- International Journal of Production Research
- Sungwon Hong + 2 more
In this paper, we address a wafer-lot scheduling problem (WLSP) for parallel multi-chamber equipment in semiconductor wafer fabrication lines. This equipment can process multiple wafers simultaneously, with each chamber handling one wafer at a time. To process a wafer, the corresponding lot must be loaded into a cassette module, which remains occupied until all wafers in the lot are processed, thereby limiting cassette module availability. Each wafer lot requires a specific process step, with target production quantities set for each step. The WLSP aims to maximise the fulfilment of these target quantities within the planning horizon while satisfying operational constraints. We show that the WLSP is strongly NP-hard and present a mixed integer programming (MIP) model based on a two-level tour structure. To enhance performance, we derive valid inequalities that strengthen the linear programming (LP) relaxation bounds of the model. In addition, we propose approximate dynamic programming (ADP) algorithms incorporating several bounding methods. Specifically, we devise three models to compute upper bounds and introduce a roll-out heuristic to obtain lower bounds. The proposed algorithms are particularly effective for larger instances. Among the upper bounding methods, the LP model with the proposed valid inequalities consistently provides the tightest bounds, demonstrating its effectiveness.
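The roll-out idea used here for lower bounds is general: score each candidate decision by completing it with a cheap base heuristic, then keep the better option. A minimal sketch on a toy 0/1 knapsack (a stand-in for illustration only, not the authors' WLSP model; the greedy base policy and the instance are invented):

```python
def complete(values, weights, cap, chosen, start):
    """Base policy: from item `start` onward, greedily take anything that fits."""
    used = sum(weights[i] for i in chosen)
    total = sum(values[i] for i in chosen)
    for i in range(start, len(values)):
        if used + weights[i] <= cap:
            used += weights[i]
            total += values[i]
    return total

def rollout(values, weights, cap):
    """Roll-out heuristic: score take/skip by completing with the base policy."""
    chosen, used = set(), 0
    for i in range(len(values)):
        skip_val = complete(values, weights, cap, chosen, i + 1)
        can_take = used + weights[i] <= cap
        take_val = complete(values, weights, cap, chosen | {i}, i + 1) if can_take else -1
        if take_val >= skip_val:
            chosen.add(i)
            used += weights[i]
    return sum(values[i] for i in chosen)

values, weights, cap = [10, 7, 12], [3, 2, 3], 5
base_val = complete(values, weights, cap, set(), 0)  # plain greedy in index order
roll_val = rollout(values, weights, cap)             # roll-out completion of each choice
```

For deterministic problems with a sequentially consistent base policy, the roll-out value never falls below the base policy's value, which is what makes it usable as a lower bound.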
- Research Article
- 10.1016/j.eswa.2025.128582
- Nov 1, 2025
- Expert Systems with Applications
- Virbon B Frial + 2 more
Solving the military medical evacuation dispatching, preemptive rerouting, redeploying, and delivering problem via tree-based machine learning and approximate dynamic programming approaches
- Research Article
- 10.1016/j.conengprac.2025.106457
- Nov 1, 2025
- Control Engineering Practice
- Bassem Boukhebouz + 3 more
Model identification of a cable-based mechanical transmission for robotics using a Best Linear Approximation approach
- Research Article
- 10.1016/j.ejor.2025.11.004
- Nov 1, 2025
- European Journal of Operational Research
- Hoang Giang Pham + 1 more
Constrained Assortment Optimization under the Mixed-Logit Model: Approximation Schemes and Outer Approximation Approaches
- Research Article
- 10.1186/s12889-025-24931-2
- Oct 28, 2025
- BMC Public Health
- Weijia Wang + 9 more
Over the past decade, multiple outbreaks of hand, foot, and mouth disease (HFMD) have occurred in East Asia, especially in China. It is crucial to understand the distribution pattern and risk factors of HFMD while also studying the corresponding characteristics of reinfections. This paper aims to jointly analyze the spatiotemporal distribution and influential factors of primary infection and reinfection of HFMD in Jiangsu province, China, under the Bayesian framework. Using county-level monthly HFMD counts from 2009 to 2023, we proposed four spatiotemporal hierarchical models with latent effects shared in the reinfection sub-model to evaluate the influence of air pollution, meteorological factors, and demographic characteristics on primary infection and reinfection of HFMD. The integrated nested Laplace approximation (INLA) approach estimated model parameters and quantified the spatial and temporal random effects. The optimal model with spatial, temporal, and spatiotemporal interaction effects indicated a significant positive influence of NO2, wind speed, relative humidity, and solar radiation, as well as a significant negative effect of PM2.5, O3, temperature above 27 °C, precipitation, and COVID-19, on both infections. Scattered status and critical primary infection have a significant positive effect on primary infection and reinfection. Positive sharing coefficients revealed similar spatiotemporal patterns of both types of infection. Beyond these fixed effects, non-linear analyses further highlighted comparable exposure-response patterns for primary infection and reinfection, with notable thresholds around 50–70 µg/m³ for PM2.5, 90 µg/m³ for O3, 80 µg/m³ for NO2, 20 °C for temperature, and 55% for relative humidity.
Our findings reveal distinct drivers of primary infection and reinfection but also similar spatiotemporal patterns, suggesting the need for both targeted protection of vulnerable groups and integrated surveillance strategies.
- Research Article
- 10.1080/03610926.2025.2571663
- Oct 22, 2025
- Communications in Statistics - Theory and Methods
- Summaira Manzoor + 2 more
This study aims to develop and compare the maximum likelihood, least squares, weighted least squares, Anderson–Darling, right-tail Anderson–Darling, Cramér–von Mises, maximum product of spacings, and Bayesian estimators for the Marshall–Olkin inverse log-logistic distribution. The Bayesian estimators are developed based on the reference and Jeffreys priors. Since closed-form Bayesian estimators cannot be obtained, an approximate Bayesian approach is proposed using Laplace's approximation, and the results are compared via Monte Carlo simulations. Additionally, three real-life data sets are analyzed for illustration. The study shows that the Bayesian estimators based on the reference and Jeffreys priors outperform their counterparts in terms of mean squared error.
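Laplace's approximation replaces an intractable posterior integral with a Gaussian integral centred at the mode of the log-integrand. A self-contained sketch on a Beta normalising constant, where the exact answer is known for comparison (an illustrative toy, not the Marshall–Olkin inverse log-logistic model from the paper):

```python
from math import exp, lgamma, log, pi, sqrt

# Approximate Z = integral_0^1 theta^a * (1-theta)^b dtheta = B(a+1, b+1).
a, b = 20.0, 20.0

# Mode and curvature of the log-integrand h(t) = a*log(t) + b*log(1-t).
mode = a / (a + b)
log_h = a * log(mode) + b * log(1.0 - mode)
curvature = a / mode**2 + b / (1.0 - mode)**2      # equals -h''(mode)

# Laplace's approximation: Z ~= exp(h(mode)) * sqrt(2*pi / -h''(mode)).
z_laplace = exp(log_h) * sqrt(2.0 * pi / curvature)

# Exact value via the log-gamma function: B(a+1, b+1).
z_exact = exp(lgamma(a + 1) + lgamma(b + 1) - lgamma(a + b + 2))

rel_error = abs(z_laplace - z_exact) / z_exact     # a couple of percent here
```

The approximation sharpens as the integrand becomes more peaked (larger a, b), which is why it works well for posteriors with moderate-to-large sample sizes.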
- Research Article
- 10.1088/1674-1137/ae1196
- Oct 10, 2025
- Chinese Physics C
- Ri-Guang Huang + 2 more
Abstract We calculate the nuclear matrix elements (NMEs) for neutrinoless double beta decay under a left-right symmetric model mediated by light neutrinos, adopting the spherical quasi-particle random-phase approximation (QRPA) approach with a realistic force. We give the relevant nuclear matrix elements for eight nuclei: $^{76}$Ge, $^{82}$Se, $^{96}$Zr, $^{100}$Mo, $^{116}$Cd, $^{128}$Te, $^{130}$Te and $^{136}$Xe, and analyze each term together with the detailed contributions of its parts.
For the $q$ term, we find that the weak-magnetism components of the nucleon current contribute as much as other components such as the axial-vector part.
We also discuss the influence of short-range correlations on these NMEs. We find that the $R$ term is more sensitive to short-range correlations than the other terms, owing to the large contribution from high exchange momenta.
- Research Article
- 10.1007/s11067-025-09700-3
- Oct 8, 2025
- Networks and Spatial Economics
- Rongrong Guo + 4 more
Abstract The demand adaptive paired-line hybrid transit (DAPL-HT) system is a public transit system that integrates a fixed route transit (FRT) service with a demand-responsive transit (DRT) service by operating the DRT service with a stable headway along a paired fixed-route line. Hybrid services of this kind attempt to address the issues of high per-capita operating cost and low occupancy, which are often observed in DRT services. Research on hybrid services and, in particular, DAPL-HT systems has typically focused on optimal network design (e.g., line spacing and headways) under exogenous demand without explicitly considering transit fares. In this paper, to facilitate the integrated design of these services from a transit agency's perspective, we address the problem of simultaneously determining transit fares and headways for the DAPL-HT system with a radial network structure. A continuous approximation approach is adopted, and the problem is formulated as a nonlinear optimization model to minimize the generalized cost subject to a revenue constraint. Demand elasticity is represented with a logit-based elastic demand function, and passengers' choice of different access/egress modes in the DAPL-HT system is captured using a logit model. Numerical results suggest that the optimal fares for both FRT and DRT do not vary monotonically with demand density and that the DAPL-HT system can be designed to be self-financing without a significant loss in ridership at moderate and high demand levels.
- Research Article
- 10.3390/e27101041
- Oct 7, 2025
- Entropy
- Piotr Bania + 1 more
The design of informatively rich input signals is essential for accurate system identification, yet classical Fisher-information-based methods are inherently local and often inadequate in the presence of significant model uncertainty and non-linearity. This paper develops a Bayesian approach that uses the mutual information (MI) between observations and parameters as the utility function. To address the computational intractability of the MI, we maximize a tractable MI lower bound. The method is then applied to the design of an input signal for the identification of quasi-linear stochastic dynamical systems. Evaluating the MI lower bound requires the inversion of large covariance matrices whose dimensions scale with the number of data points N. To overcome this problem, an algorithm that reduces the dimension of the matrices to be inverted by a factor of N is developed, making the approach feasible for long experiments. The proposed Bayesian method is compared with the average D-optimal design method, a semi-Bayesian approach, and its advantages are demonstrated. The effectiveness of the proposed method is further illustrated through four examples, including atomic sensor models, where input signals that generate a large amount of MI are especially important for reducing the estimation error.
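For the linear-Gaussian case the MI is available in closed form, and the determinant identity det(I_N + XΣXᵀ/σ²) = det(I_p + ΣXᵀX/σ²) is one standard way to shrink the matrices involved from the number of data points N down to the number of parameters p, in the spirit of the dimension-reduction step described above. A sketch under these assumptions (the model and numbers are illustrative, not the atomic-sensor examples from the paper):

```python
import numpy as np

def mi_data_space(X, prior_cov, noise_var):
    """I(y; theta) = 0.5 * logdet(I_N + X Sigma X^T / sigma^2), in nats."""
    N = X.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(N) + X @ prior_cov @ X.T / noise_var)
    return 0.5 * logdet

def mi_parameter_space(X, prior_cov, noise_var):
    """Same MI via the p x p determinant -- cheap when p << N."""
    p = X.shape[1]
    _, logdet = np.linalg.slogdet(np.eye(p) + prior_cov @ X.T @ X / noise_var)
    return 0.5 * logdet

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))     # N = 200 observations, p = 3 parameters
Sigma = np.diag([1.0, 0.5, 0.25])     # prior covariance of theta
mi_big = mi_data_space(X, Sigma, noise_var=0.1)
mi_small = mi_parameter_space(X, Sigma, noise_var=0.1)
```

Both routines return the same number, but the second inverts only a p x p matrix, so its cost no longer grows cubically with the experiment length.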
- Research Article
- 10.1080/02331934.2025.2569782
- Oct 7, 2025
- Optimization
- Ouayl Chadli + 3 more
Dynamical systems have inspired and explained several accelerated algorithms for a wide range of optimization problems. However, because the objective functions in many real-world applications lack smoothness and convexity, these accelerated algorithms cannot be applied directly. This paper proposes a smoothing approximation approach to non-smooth, non-convex machine learning optimization problems. Our work is motivated by the following goal: to develop a direct method for finding critical points of objective functions of machine learning problems that are non-smooth and non-convex. To this end, we establish the convergence of an alternative algorithm for smooth functions without convexity, supplementing some recent results of Attouch et al.
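The core trick behind smoothing approximations can be shown in a few lines: replace a non-smooth term such as |t| with the smooth surrogate sqrt(t² + µ²) and run plain gradient descent on the surrogate. The objective, step size, and smoothing level below are invented for illustration; practical schemes typically shrink µ over the iterations.

```python
from math import sqrt

MU = 0.05    # smoothing level: sqrt(t**2 + MU**2) -> |t| as MU -> 0

def grad_smoothed(x):
    """Gradient of f_mu(x) = sqrt((x-1)^2 + MU^2) + 0.5*(x-2)^2,
    a smooth surrogate for the non-smooth f(x) = |x-1| + 0.5*(x-2)^2."""
    t = x - 1.0
    return t / sqrt(t * t + MU * MU) + (x - 2.0)

x = 3.0                            # arbitrary starting point
for _ in range(2000):
    x -= 0.04 * grad_smoothed(x)   # step < 2/(1/MU + 1), so the iteration is stable

# The non-smooth objective is minimised at x = 1; the smoothed minimiser
# lands within a small mu-dependent neighbourhood of it.
```

The smaller µ is, the closer the surrogate minimiser gets to the true non-smooth one, at the price of a stiffer gradient and a smaller safe step size; that trade-off is what smoothing schedules manage.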
- Research Article
- 10.1016/j.enganabound.2025.106308
- Oct 1, 2025
- Engineering Analysis with Boundary Elements
- Rafael Castro Mota + 4 more
An approximate hybrid approach for computing low-frequency outdoor sound propagation
- Research Article
- 10.1029/2025jb031877
- Oct 1, 2025
- Journal of Geophysical Research: Solid Earth
- Benedikt Braszus + 2 more
Abstract We present the first crustal 3D P- and S-wave velocity model covering the entire Greater Alpine region (GAR) based on local earthquake tomography. Applying the deep neural network PhaseNet to broad-band waveforms from 989 stations in the GAR between 2016 and 2022, we determined 173,841 P- and 68,967 S-phase onsets from 2,553 events with magnitudes ≥ 1.5 recorded at epicentral distances up to 1,000 km. With the SIMUL2023 algorithm we simultaneously relocate the seismicity and invert for 3D velocity structure using an approximate Bayesian approach to minimize the influence of the starting model. The excellent overall agreement with previously published geophysical transects gives us confidence that our new 3D model is representative of the whole Alpine region. We find a consistent decrease in P- and S-wave velocity at mid-crustal depths beneath the Western, Central and Eastern arc, which we name the Alpine Mid Crustal Low Velocity (AMCLV) anomaly. The AMCLV is terminated sharply by the Dolomites indenter east of the Giudicarie line and is visible again east of 12°E, although considerably less pronounced in the Eastern Alps. Because of its partial connection to the upper crust in the European foreland, we interpret this material as formerly part of the European upper crust that was stacked during the collision process. Based on 1D velocity profiles we suggest that the southern part of the Dolomites indenter consists mainly of undeformed northward-dipping Adriatic mantle, whereas the northern part shows a thickened lower crust, possibly caused by Permian intrusions.
- Research Article
- 10.1016/j.jenvman.2025.127033
- Oct 1, 2025
- Journal of environmental management
- Zhuo Chen + 9 more
Real-time field measurements of bioaerosols in the agricultural environment: Concentrations, components and environmental impacts.
- Research Article
- 10.47392/irjaeh.2025.0560
- Oct 1, 2025
- International Research Journal on Advanced Engineering Hub (IRJAEH)
- Madhavilata Mangipudi + 3 more
Vector Databases (VDBs) are now an essential building block in Natural Language Processing (NLP), facilitating the efficient storage and retrieval of high-dimensional semantic embeddings. Linear algebra is at the core of these systems, where matrix operations form the basis of embedding creation, similarity computation, and indexing. We discuss the mathematical underpinnings of VDBs from a matrix-based formulation in this paper. We show how similarity measures like cosine similarity and distance metrics like Euclidean and Mahalanobis distances drive nearest-neighbor retrieval. With FAISS for indexing and Gradio for prototyping, we introduce a comparative examination of exact vs. approximate search approaches in NLP retrieval tasks. Our work is a hybrid approach that unites mathematical precision with practical assessment, closing the gap between abstract matrix derivations and interactive use of VDBs.
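The exact-search baseline such comparisons start from reduces to a normalised matrix-vector product followed by a top-k sort. A minimal sketch with NumPy (the toy embeddings and query are invented; a FAISS index would replace the argsort with an approximate structure):

```python
import numpy as np

def cosine_topk(embeddings, query, k=2):
    """Exact nearest-neighbour retrieval under cosine similarity."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = E @ q                      # one matrix-vector product scores everything
    top = np.argsort(-sims)[:k]       # indices of the k most similar rows
    return top, sims[top]

docs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy 2-D embeddings
idx, scores = cosine_topk(docs, np.array([1.0, 0.1]))
```

Normalising rows once up front turns cosine similarity into a plain inner product, which is exactly the matrix formulation the paper builds on; approximate indexes trade a little recall for sub-linear query time.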
- Research Article
- 10.1063/5.0292765
- Oct 1, 2025
- Physics of Plasmas
- G V Naidis + 1 more
An approximate approach to estimating the average velocity and radius of streamers generated at the fronts of applied voltage pulses is presented. It is shown that this approach gives the dependence of the average streamer parameters on the value of the average applied electric field consistent with results of numerical streamer simulations.
- Research Article
- 10.33889/ijmems.2025.10.5.061
- Oct 1, 2025
- International Journal of Mathematical, Engineering and Management Sciences
- Anik Anekawati + 2 more
Research involving causal relationships among latent variables that generally use structural equation modeling (SEM) analysis and spatial data simultaneously has an unavoidable impact on using spatial SEM analysis. A region-based spatial SEM model was developed for cases that include spill-over effects across regions. Spatial weights in the inner model offered flexibility and were more informative than conventional techniques. Therefore, this research developed a model that involved multivariate data, latent variables, and spatial data, specifically spatial autoregressive, in the form of a multivariate spatial autoregressive model that involves latent variables (MSAR-LVs). This development integrated latent variable estimation, error distribution of the model, parameter estimation, spatial dependency testing, and partial parameter hypothesis testing, which used the weighted least squares (WLS) method, the properties of expectation and variance, the maximum likelihood estimation (MLE) method, the Lagrange multiplier (LM) test, and the Wald test, respectively. The results of the MSAR-LVs model development were applied to economic growth modeling for regencies in East Java, Indonesia. The research findings on developing the MSAR-LVs model are as follows: the error vector distribution in the MSAR-LVs model conformed to a normal distribution. Its mean value was the estimated vector of the exogenous variables. Meanwhile, the standard deviation was represented by a matrix derived from the Kronecker product between the inverse of a diagonal matrix containing the parameters of the outer model for the exogenous variables and the identity matrix. The parameter estimators did not have a closed-form solution; therefore, they were estimated using a numerical approximation approach based on the concentrated log-likelihood function. The estimator matrix that yielded the maximum log-likelihood value was selected.
Under the null hypothesis, the LM and Wald test statistics conformed to a chi-square distribution with one degree of freedom. The other findings indicated that the economic growth model had a significant and negative spatial autoregressive coefficient, while the coefficient for the socio-demographic model was not significant. Additionally, human capital exerted a significant effect on economic growth, illustrating that each regency experienced a negative spill-over effect on economic growth from neighboring regencies, influenced by the present human capital in those regions.
- Research Article
- 10.1080/01621459.2025.2529027
- Sep 26, 2025
- Journal of the American Statistical Association
- Zheng Zhou + 3 more
The Shapley value is a well-known concept in cooperative game theory that provides a fair way to distribute revenues or costs among players. It has found applications in many fields besides economics, such as marketing and biology. Recently, it has been widely applied in data science for data quality evaluation and model interpretation. However, the computation of the Shapley value is an NP-hard problem. For a cooperative game with n players, calculating Shapley values for all players requires evaluating the values of 2^n different coalitions, which makes it infeasible for large n. In this article, we reveal the connection between cooperative games and two-level factorial experiments. For any coalition, each player's participation status can be represented as a two-level factor, while the coalition value can be viewed as the expected response of an experimental trial under the corresponding factor-level combination. Building on this connection, we derive a factorial-effect representation of the Shapley value and propose a fast approximation approach based on a newly proposed fractional factorial design. Under certain conditions, our approach can obtain true Shapley values by evaluating the values of fewer than 4n^2 − 4 different coalitions. Generally, highly accurate approximations of Shapley values can also be obtained by evaluating the values of an additional O(n^2) coalitions. Multiple simulations and real case examples demonstrate that, with equivalent computational cost, our method provides significantly more accurate approximations than several popular methods. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
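The permutation definition of the Shapley value makes the cost explicit: exact computation averages marginal contributions over all n! orderings (equivalently, all coalition values), which is exactly what approximation schemes like the one above try to avoid. An exact brute-force sketch for a tiny game (the three-player "glove" game is a textbook example, not taken from the paper):

```python
from itertools import permutations
from math import factorial

def shapley(n, v):
    """Exact Shapley values: average each player's marginal contribution
    over every ordering in which the grand coalition can form."""
    phi = [0.0] * n
    for order in permutations(range(n)):
        coalition = frozenset()
        for player in order:
            phi[player] += v(coalition | {player}) - v(coalition)
            coalition = coalition | {player}
    return [p / factorial(n) for p in phi]

# Glove game: player 0 owns a left glove, players 1 and 2 each own a right
# glove; a coalition is worth 1 iff it can form at least one pair.
def glove(S):
    return 1.0 if 0 in S and (1 in S or 2 in S) else 0.0

phi = shapley(3, glove)   # known answer: [2/3, 1/6, 1/6]
```

The values sum to the grand-coalition worth (efficiency), and the factorial loop is what blows up beyond a dozen or so players, motivating designs that pick only a structured subset of coalitions.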
- Research Article
- 10.1007/s11222-025-10734-3
- Sep 25, 2025
- Statistics and Computing
- Anna De Magistris + 2 more
Abstract In this work, we introduce a novel approach for functional data approximation based on generalized P-splines with non-uniform and adaptively placed knots. The key innovation of our proposal is the integration of a conditioning-aware strategy for selecting both the number and positions of the knots, as well as the regularization parameters. By reformulating the Tikhonov regularization problem, we propose a computationally efficient criterion that controls model complexity while ensuring numerical stability. The resulting approximation framework not only improves the fit across the entire functional domain but also maintains compactness and robustness. Extensive numerical experiments conducted on both synthetic and real-world datasets demonstrate that our approach significantly outperforms traditional free knot and smoothing spline methods in terms of approximation error and conditioning.
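The Tikhonov-regularised fit at the heart of such methods is a single linear solve. A simplified sketch: a discrete (Whittaker-style) smoother with a second-difference penalty, which is the basic special case that generalized P-splines elaborate on; the signal, noise level, and λ below are invented for illustration.

```python
import numpy as np

def tikhonov_smooth(y, lam):
    """Minimise ||c - y||^2 + lam * ||D2 c||^2, D2 = 2nd-difference operator."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # rows compute c[i] - 2c[i+1] + c[i+2]
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
truth = np.sin(2.0 * np.pi * x)
y = truth + 0.3 * rng.standard_normal(x.size)

fit = tikhonov_smooth(y, lam=10.0)
rms_noisy = np.sqrt(np.mean((y - truth) ** 2))
rms_fit = np.sqrt(np.mean((fit - truth) ** 2))   # smoothing should cut the error
```

With λ = 0 the solve returns the data unchanged; as λ grows the fit flattens toward a straight line. Choosing λ, the knot count, and knot placement jointly, while keeping the system matrix well conditioned, is precisely the selection problem the paper targets.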
- Research Article
- 10.3847/1538-4357/adf6bb
- Sep 23, 2025
- The Astrophysical Journal
- Ruihan Wang + 7 more
Abstract A data-driven approach to improve the accuracy of state-resolved molecular collision data for the relaxation of vibrationally and rotationally excited molecules is developed. We apply the approach to the SiO–H2 collision system, which is crucial for reliable astrophysical environment modeling of SiO emission. The most accurate quantum scattering calculation method employs the coupled-channel formalism, which is computationally expensive. Approximate approaches, generally referred to as decoupling approximations, are significantly more computationally efficient, but less accurate. To bridge this accuracy gap, we use a multilayer perceptron to predict the difference between the accurate and approximate results. Our model is trained on a carefully selected and preprocessed data set, which includes both approximate and accurate calculation results. We show that our approach significantly improves the accuracy of the approximate calculation results, as the relative rms errors from the predicted results are orders of magnitude smaller than those from the approximate calculation results, benchmarked against the accurate calculations. The present approach can be applied to similar problems in other fields to improve the accuracy of computationally efficient approximate methods, reducing computation time and enhancing automation.
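The residual-learning pattern is simple to state: keep the cheap approximation, and fit a model only to the gap between the cheap and accurate results. A self-contained sketch with an ordinary polynomial fit standing in for the paper's multilayer perceptron (the functions and grids are invented stand-ins, not collision data):

```python
import numpy as np

def accurate(x):
    """Stand-in for the expensive, accurate method (here: exact sine)."""
    return np.sin(x)

def approx(x):
    """Stand-in for the cheap approximate method (truncated Taylor series)."""
    return x - x**3 / 6.0

# Train a correction model on the residual accurate - approx.
x_train = np.linspace(0.0, 2.0, 50)
residual_fit = np.polynomial.Polynomial.fit(
    x_train, accurate(x_train) - approx(x_train), deg=5)

# Evaluate on unseen points: corrected = cheap result + predicted residual.
x_test = np.linspace(0.05, 1.95, 37)
rms_approx = np.sqrt(np.mean((approx(x_test) - accurate(x_test)) ** 2))
corrected = approx(x_test) + residual_fit(x_test)
rms_corrected = np.sqrt(np.mean((corrected - accurate(x_test)) ** 2))
```

Because the residual is typically smoother and smaller than the quantity itself, a modest model trained on relatively few accurate evaluations can close most of the accuracy gap, which is the economy the paper exploits.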
- Research Article
- 10.3390/s25185820
- Sep 18, 2025
- Sensors (Basel, Switzerland)
- Baiyi Li + 2 more
Maritime Internet of Things (IoT) with unmanned surface vessels (USVs) faces tight onboard computing and sparse wireless links. Compute-intensive vision and sensing workloads often exceed latency budgets, which undermines timely decisions. In this paper, we propose a novel distributed computation offloading framework for maritime IoT scenarios. By leveraging the limited computational resources of USVs within a device-to-device (D2D)-assisted edge network and the mobility advantages of UAV-assisted edge computing, we design a breadth-first search (BFS)-based distributed computation offloading game. Building upon this, we formulate a global latency minimization problem that jointly optimizes UAV hovering coordinates and arrival times. This problem is solved by decomposing it into subproblems addressed via a joint Alternating Direction Method of Multipliers (ADMM) and Successive Convex Approximation (SCA) approach, effectively decoupling the optimization of UAV arrival times and hovering coordinates. Extensive simulations verify the effectiveness of our framework, demonstrating up to a 49.6% latency reduction compared with traditional offloading schemes.