Articles published on Penalty Term
- New
- Research Article
- 10.1002/mma.70285
- Nov 6, 2025
- Mathematical Methods in the Applied Sciences
- Wansheng Wang + 3 more
ABSTRACT In this paper, we study the stability and convergence of the variable step-size IMEX BDF2 method for numerically solving nonlinear partial integro-differential equations (PIDEs) with a penalty term, which describe the jump–diffusion American option pricing model in finance. To avoid the full matrix arising from the discretization of the nonlocal integral operator, we discretize the integral operator explicitly and the remaining operators implicitly. The monotonicity of the nonlinear operator plays a key role in establishing the stability of the variable step-size IMEX BDF2 method for abstract nonlinear PIDEs. Based on this stability result, global error bounds for the variable step-size IMEX BDF2 method are derived. By combining fixed-point iteration with a finite difference method for spatial discretization, the nonlinear PIDEs are solved effectively. Numerical results illustrate the effectiveness of the proposed method for American options under jump–diffusion models.
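For readers unfamiliar with the penalty approach, the American-option constraint $V \geq g$ (with payoff $g$) is commonly replaced by a nonlinear penalty source term; a generic penalized jump–diffusion PIDE of this kind can be sketched as (notation illustrative, not the paper's)
$$\frac{\partial V}{\partial \tau} \;=\; \mathcal{L}_D V \;+\; \mathcal{J}V \;+\; \lambda\,\max\big(g(x)-V,\,0\big),$$
where $\mathcal{L}_D$ is the local diffusion operator, $\mathcal{J}$ the nonlocal jump integral operator (the part treated explicitly in the IMEX scheme), and $\lambda \gg 1$ the penalty parameter; as $\lambda \to \infty$ the solution approaches that of the original free-boundary problem.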
- New
- Research Article
- 10.1002/nme.70168
- Nov 5, 2025
- International Journal for Numerical Methods in Engineering
- Leonard Freisem + 4 more
ABSTRACT This work presents a new inverse algorithm based on magnetic tomography (MT) for diagnosing fuel-cell stacks (FC-stacks). The objective is to determine the local internal resistivities of the stack from external magnetic measurements. The inverse problem is solved by minimizing the difference between the simulated and measured magnetic induction using the gradient descent method. To limit the search space, a neighbor-regularization is implemented as a penalty term, and the stack voltage is added as a constraint. To accelerate the solution process and improve accuracy, we provide the derivatives of the optimization problem. This involves differentiating the employed finite element method (FEM) model, where the sensitivities are determined using the adjoint-state method. To validate the novel approach, the algorithm is first applied to numerical data and subsequently tested on measurements from a real FC-stack; the results are presented in this article.
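In generic terms (symbols illustrative, not the authors' notation), the penalized inverse problem described above has the structure
$$\min_{\rho}\; \big\lVert B_{\mathrm{sim}}(\rho) - B_{\mathrm{meas}} \big\rVert_2^2 \;+\; \alpha \sum_{(i,j)\in\mathcal{N}} \big(\rho_i - \rho_j\big)^2 \quad \text{subject to}\quad U_{\mathrm{stack}}(\rho) = U_{\mathrm{meas}},$$
where $\mathcal{N}$ is the set of neighboring cells (the neighbor-regularization penalty), the FEM forward model maps the resistivities $\rho$ to the simulated induction $B_{\mathrm{sim}}$, and the gradient of the objective is obtained efficiently via the adjoint-state method.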
- New
- Research Article
- 10.1080/17445760.2025.2579546
- Nov 5, 2025
- International Journal of Parallel, Emergent and Distributed Systems
- Yutaro Nakahara + 4 more
Given an undirected graph, a matching is a set of edges with no shared vertices. A maximal matching cannot be extended by adding edges without violating this condition. The minimum maximal matching problem seeks the smallest such matching. While it can be formulated as a Quadratic Unconstrained Binary Optimization (QUBO) problem, existing methods require complex penalty terms. This paper presents a novel, penalty-free QUBO formulation that is simpler and more intuitive. We analyze its theoretical properties and evaluate its performance using several QUBO solvers. Our approach enhances the practicality of QUBO-based optimization for combinatorial graph problems.
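To illustrate the kind of penalty-based construction the authors avoid, the sketch below builds a QUBO for the matching constraint alone (one penalty term per pair of edges sharing a vertex); encoding maximality would require further, more intricate penalty terms, which is precisely the complexity a penalty-free formulation removes. The toy graph and penalty weight are hypothetical.

```python
from itertools import combinations

def matching_qubo(edges, penalty=10.0):
    """Penalty-based QUBO for 'select few edges subject to a matching constraint'.

    Variables: one binary x_e per edge. Objective: minimize sum_e x_e.
    Constraint: edges sharing a vertex may not both be selected, enforced by a
    quadratic penalty. (Maximality of the matching is NOT encoded here.)
    """
    Q = {}
    for e in edges:                      # linear objective: count selected edges
        Q[(e, e)] = Q.get((e, e), 0.0) + 1.0
    for e, f in combinations(edges, 2):  # penalize conflicting edge pairs
        if set(e) & set(f):              # e and f share a vertex
            Q[(e, f)] = Q.get((e, f), 0.0) + penalty
    return Q

# toy 4-cycle on vertices 0..3
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(matching_qubo(edges))
```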
- New
- Research Article
- 10.1080/23249935.2025.2583461
- Nov 5, 2025
- Transportmetrica A: Transport Science
- Maocan Song + 2 more
We study an uncapacitated, multi-commodity network design problem with a construction budget constraint and a concave objective function. Instead of minimising the expected travel time across all edges, the objective jointly minimises, for each commodity, the travel time that lies one standard deviation above its expected value. The idea is that the decision-maker wants not only to minimise travel times on average but also to keep their variability as small as possible. The proposed mean-standard deviation network design model is a nonlinear, concave integer programme. This study proposes two novel methods, Lagrangian relaxation (LR) and augmented Lagrangian relaxation (ALR), to tackle this problem. The LR is extended with quadratic penalty terms to obtain the ALR, and the alternating direction method of multipliers (ADMM) is introduced to decompose the ALR into routing and construction optimisations. Computational results on three networks are presented, showing that these methods achieve good integrality gaps.
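For context, the augmented Lagrangian adds a quadratic penalty on constraint violation to the ordinary Lagrangian; in generic form (not the paper's specific model),
$$\mathcal{L}_{\rho}(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top} g(x) \;+\; \frac{\rho}{2}\,\lVert g(x)\rVert_2^2,$$
where $g(x)=0$ collects the relaxed coupling constraints and $\rho>0$ is the penalty parameter; ADMM then alternates minimization over blocks of variables (here, the routing and construction subproblems) before updating the multipliers $\lambda$.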
- New
- Research Article
- 10.1088/2632-2153/ae1b71
- Nov 4, 2025
- Machine Learning: Science and Technology
- Shota Deguchi + 1 more
Abstract Physics-informed neural networks (PINNs) have attracted significant attention in scientific machine learning for their capability to solve forward and inverse problems governed by partial differential equations. However, the accuracy of PINN solutions is often limited by the treatment of boundary conditions. Conventional penalty-based methods, which incorporate boundary conditions as penalty terms in the loss function, cannot guarantee exact satisfaction of the given boundary conditions and are highly sensitive to the choice of penalty parameters. This paper demonstrates that distance functions, specifically R-functions, can be leveraged to enforce boundary conditions and overcome these limitations. R-functions provide normalized distance fields, enabling flexible representation of boundary geometries, including non-convex domains, and accommodating various types of boundary conditions. Nevertheless, distance functions alone are insufficient for accurate inverse analysis in PINNs. To address this, we propose an integrated framework that combines the normalized distance field with bias-corrected adaptive weight tuning to improve both accuracy and efficiency. Numerical results show that the proposed method provides more accurate and efficient solutions to various inverse problems than penalty-based approaches, even in the presence of non-convex geometries with complex boundary conditions. This approach offers a reliable and efficient framework for inverse analysis using PINNs, with potential applications across a wide range of engineering problems.
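A minimal sketch of the two boundary-condition treatments contrasted above, for a Dirichlet condition $u = g$ on the boundary: the penalty approach adds a weighted boundary loss, whereas the distance-function approach builds the condition into the ansatz so that it holds exactly. The toy 1D domain, polynomial "network", and names below are illustrative, not the authors' code.

```python
import numpy as np

# Toy 1D setting on [0, 1] with Dirichlet data u(0) = a, u(1) = b.
a, b = 0.0, 1.0

def net(x, w):
    """Stand-in for a neural network: here just a small polynomial in x."""
    return w[0] + w[1] * x + w[2] * x ** 2

# (1) Penalty approach: u = net(x); the boundary condition enters the loss
#     only as a weighted penalty and is therefore satisfied only approximately.
def boundary_penalty(w, beta=100.0):
    return beta * ((net(0.0, w) - a) ** 2 + (net(1.0, w) - b) ** 2)

# (2) Distance-function approach: phi vanishes on the boundary, so the ansatz
#     u = g + phi * net satisfies the Dirichlet condition exactly for any w.
def phi(x):                     # simple (approximate) distance function for [0, 1]
    return x * (1.0 - x)

def g_ext(x):                   # smooth extension of the boundary data
    return a + (b - a) * x

def u_hard(x, w):
    return g_ext(x) + phi(x) * net(x, w)

w = [0.3, -0.2, 0.5]
print(boundary_penalty(w))                # nonzero: the soft BC is violated
print(u_hard(0.0, w), u_hard(1.0, w))     # exactly 0.0 and 1.0
```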
- New
- Research Article
- 10.3390/electronics14214324
- Nov 4, 2025
- Electronics
- Theodoros I Maris + 2 more
Ensuring the resilience and efficiency of modern distribution networks is increasingly critical in the presence of distributed energy resources (DERs). This study presents a multi-objective optimization framework based on a Genetic Algorithm (GA) to improve voltage profiles, minimize active power losses, and enhance resilience in a radial distribution network. A simplified 6-bus radial test system with DERs at buses 2, 3, and 4 is considered as a proof-of-concept case study. The GA optimizes control variables, including DER setpoints and network reconfiguration, under operational and thermal constraints. The optimization employs a weighted objective function combining voltage profile improvement, loss minimization, and a resilience penalty term that accounts for bus voltage collapse and branch overloads during DER contingencies. Simulation results demonstrate that the GA significantly improves network performance: the minimum bus voltage rises from 0.92 pu to 0.97 pu, while the total real power losses decrease by 46% (from 55.3 kW to 29.7 kW). Moreover, in the event of a DER outage, the optimized configuration preserves 100% load delivery, compared to 89% in the base case. These findings confirm that GA is an effective and practical tool for enhancing distribution network operation and resilience under high DER penetration. Future work will extend the approach to larger IEEE benchmark systems and time-series scenarios.
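The weighted objective described above can be written generically as (weights and penalty form illustrative)
$$F(u) \;=\; w_1 \sum_{i}\big(1 - V_i(u)\big)^2 \;+\; w_2\, P_{\mathrm{loss}}(u) \;+\; w_3\, \Phi_{\mathrm{res}}(u),$$
where $u$ collects the DER setpoints and switch states, $V_i$ are the bus voltages in pu, $P_{\mathrm{loss}}$ is the total real power loss, and $\Phi_{\mathrm{res}}$ is a resilience penalty that grows with bus voltage collapse or branch overloads under DER contingencies.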
- New
- Research Article
- 10.1002/cjce.70140
- Nov 3, 2025
- The Canadian Journal of Chemical Engineering
- He Li + 2 more
Abstract The inherent time-lag effects, nonlinear dependencies, and dynamic coupling mechanisms in desulphurization processes pose significant challenges to precise quality prediction and proactive control. This study presents a novel collaborative optimization framework integrating causal inference with temporal feature engineering to achieve dynamic early warning and intelligent control. Methodologically, we first develop a multi-modal time-lag estimation approach combining dynamic time warping, Granger causality tests, and time-delayed mutual information, resolving temporal asynchrony between process variables through dynamic programming and information-theoretic analysis. Building upon this, a dynamic autoregressive latent variable model (DALM) with Bayesian estimation is established to capture cross-variable interaction dynamics, enhanced by a long short-term memory (LSTM)-based architecture with attention mechanisms for nonlinear temporal dependency modelling. The proposed early-warning system synergizes anomaly detection (an isolation forest/one-class SVM/autoencoder triad) with NOTEARS-optimized causal graphs, achieving 93.1% prediction accuracy (AUC = 0.98) through multi-feature fusion of temporal patterns, causal drivers, and multivariate anomalies. For control optimization, we formulate a hybrid model predictive control (MPC) strategy incorporating warning-adaptive penalty terms, demonstrating 89.2% alignment between warning probability and quality deviations while the concentrations of SO₂/H₂S are kept within a fixed range through restricted adjustment of the reactor feed flow. Validations confirm that the framework reduces unplanned shutdowns by 37% compared to conventional PID control, with R² = 0.9144 for SO₂ and 0.9114 for H₂S concentration predictions. This work provides a systematic solution addressing temporal-causal decoupling challenges in complex chemical processes, significantly advancing intelligent optimization in pollution control systems.
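A generic MPC cost with a warning-adaptive penalty weight, sketched here only to illustrate the idea (not the paper's exact formulation), is
$$J_k \;=\; \sum_{j=1}^{N_p}\lVert y_{k+j} - r_{k+j}\rVert_Q^2 \;+\; \sum_{j=0}^{N_c-1}\lVert \Delta u_{k+j}\rVert_R^2 \;+\; w(p_k)\,\Phi\big(y_{k+1},\dots,y_{k+N_p}\big),$$
where $p_k$ is the current warning probability, $w(\cdot)$ an increasing weight, and $\Phi$ a penalty on predicted SO₂/H₂S concentrations approaching their limits.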
- New
- Research Article
- 10.3390/ijtpp10040038
- Nov 3, 2025
- International Journal of Turbomachinery, Propulsion and Power
- Stefan Fraas + 2 more
Different strategies to include structural mechanical aspects in the design process of hydraulic machines are compared. To this end, an axial turbine's runner blade is optimized using evolutionary algorithms. Four different setups with a scalar objective function are investigated. In the first two setups, structural mechanical aspects are added to the optimization process as a constraint, once with a penalty term and once with a modified selection operator. If structural mechanical aspects are treated as a constraint, the risk of premature convergence increases. For this reason, two additional setups are analyzed in which the minimization of the maximum stress is included as an objective within the scalar objective function. Furthermore, a multi-objective optimization with resolution of the Pareto front is performed. The differences in fitness between the setups using a scalar objective function are small. However, the best result is found for a setup where the minimization of the stress is added as an objective. This demonstrates the risk of premature convergence associated with constraint-handling strategies. The worst result is found for the multi-objective optimization with resolution of the Pareto front, most likely due to a less directed search.
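In the penalty-based setup, the stress constraint is typically folded into the scalar fitness in a form such as (illustrative, not the exact formulation used)
$$F(x) \;=\; f_{\mathrm{hyd}}(x) \;+\; c\,\max\!\big(0,\; \sigma_{\max}(x) - \sigma_{\mathrm{allow}}\big),$$
where $f_{\mathrm{hyd}}$ is the hydraulic objective, $\sigma_{\max}$ the maximum stress of the candidate blade, $\sigma_{\mathrm{allow}}$ the admissible stress, and $c>0$ the penalty factor; how strongly such a term steers the search is what drives the premature-convergence risk discussed above.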
- New
- Research Article
- 10.1080/07350015.2025.2583457
- Nov 1, 2025
- Journal of Business & Economic Statistics
- Jingxuan Luo + 2 more
Regularization methods are crucial for the analysis of high-dimensional data, with most methods adding a penalty term to the loss function explicitly. In this paper, we introduce an implicit regularization technique through over-parameterization and propose a calibrated penalty-free (CPF) estimation method for high-dimensional linear errors-in-variables models. This method calibrates the bias caused by measurement errors while avoiding the bias introduced by penalty terms. We use the gradient descent algorithm to minimize the over-parameterized calibrated loss function without penalties, resulting in a sparse estimate of regression coefficients and thus achieving implicit regularization. Furthermore, a novel bootstrap methodology is introduced to address the challenge posed by the unknown covariance matrix of measurement errors. The oracle inequality for estimation error is established under certain regularity conditions. Extensive simulation studies and a real data analysis illustrate the competitive finite sample performance of the proposed method.
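To illustrate the implicit-regularization idea in its simplest, error-free setting, the sketch below over-parameterizes the coefficients as $\beta = u \odot u - v \odot v$ and runs plain gradient descent on an unpenalized squared loss; a small initialization keeps unneeded coordinates near zero. The paper's calibrated loss additionally corrects for measurement error and uses a bootstrap for the unknown error covariance, both of which this toy example omits.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]     # sparse truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)

alpha, lr, iters = 1e-4, 0.05, 2000
u = alpha * np.ones(p)        # over-parameterization: beta = u*u - v*v
v = alpha * np.ones(p)
for _ in range(iters):
    beta = u * u - v * v
    grad = -X.T @ (y - X @ beta) / n     # gradient of 0.5*||y - X beta||^2 / n w.r.t. beta
    u, v = u - lr * 2 * grad * u, v + lr * 2 * grad * v   # chain rule; no penalty anywhere
print(np.round(u * u - v * v, 2)[:6])    # ~ [ 2.  -1.5  1.   0.   0.   0. ]
```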
- New
- Research Article
- 10.3390/axioms14110812
- Oct 31, 2025
- Axioms
- Muteb Faraj Alharthi
Multicollinearity among predictor variables is a common challenge in modeling chemical and environmental datasets in the physical sciences, often leading to unstable and unreliable parameter estimates when fitting regression models. The ridge regression method is an effective solution to this problem: it introduces a penalty term (k) that shrinks the parameters, mitigating multicollinearity and balancing bias and variance. In this study, we propose three novel shrinkage parameters for ridge regression and use them to develop three ridge-type estimators, referred to as SPS1, SPS2, and SPS3, which are designed to enhance parameter estimation based on the sample size (n), number of predictors (p), and standard error (σ). These shrinkage estimators aim to improve the accuracy of regression models in the presence of multicollinearity. To evaluate the performance of the SPS estimators, we conduct comprehensive Monte Carlo simulations comparing them with ordinary least squares (OLS) and other existing estimators based on the mean squared error (MSE) criterion. The simulation results demonstrate that the SPS estimators outperform OLS and the other methods. Additionally, we apply the three shrinkage estimators to two real-world environmental and chemical datasets, showing their ability to address multicollinearity compared with OLS and other estimators. The proposed SPS estimators offer more stable and accurate regression results, contributing to improved decision-making in environmental modeling, pollution analysis, and other scientific research involving correlated variables.
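For reference, ridge regression replaces the OLS criterion with a penalized one; in standard notation,
$$\hat{\beta}(k) \;=\; \arg\min_{\beta}\;\lVert y - X\beta\rVert_2^2 + k\,\lVert\beta\rVert_2^2 \;=\; \big(X^{\top}X + kI\big)^{-1}X^{\top}y, \qquad k>0,$$
so that a larger $k$ shrinks the coefficients more strongly; the proposed SPS1–SPS3 estimators differ in how $k$ is computed from $n$, $p$, and the estimated $\sigma$.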
- New
- Research Article
- 10.1080/00295639.2025.2575538
- Oct 30, 2025
- Nuclear Science and Engineering
- Suo-Yi Xiang + 4 more
In emergency scenarios such as nuclear radiation accidents, mobile robots are often deployed for tasks including detection, search and rescue, and environmental assessment. To mitigate radiation-induced damage to onboard components, the cumulative radiation dose is commonly incorporated as an optimization objective in path planning. However, the highly uncertain radiation field and the frequent emergence of unknown obstacles during such incidents pose significant challenges for traditional path planning methods, which struggle to generate optimal paths and implement efficient, reliable obstacle avoidance strategies. To address these issues, this study proposes a real-time path planning method for radiation environments [Real-Time A* algorithm with Dynamic Window Approach (RTAD)] based on an improved A* algorithm combined with the Dynamic Window Approach (DWA). The method jointly optimizes the cost function of the A* algorithm by integrating path length, cumulative radiation dose, and energy consumption, enhanced through a multiscale evaluation mechanism to improve planning accuracy. Additionally, the improved DWA algorithm incorporates a radiation-aware evaluation function, adaptive weight adjustment strategy, and speed variation penalty term, enabling effective real-time obstacle avoidance in complex nuclear radiation environments. Simulation results demonstrate that compared to the conventional A*-DWA approach, the proposed RTAD algorithm reduces cumulative radiation dose by approximately 23.1%, increases average movement speed by 28.6%, and decreases planning time by 23.4%.
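Generic forms of the two cost functions described above (weights and notation illustrative, not the authors') are
$$g(n) = \sum_{\text{path to } n}\big(w_1\,\Delta\ell + w_2\,\dot{D}\,\Delta t + w_3\,E\big), \qquad G(v,\omega) = \alpha\,\mathrm{heading} + \beta\,\mathrm{clearance} + \gamma\,\mathrm{velocity} - \delta\,\dot{D} - \eta\,\lvert\Delta v\rvert,$$
where $\dot{D}$ is the local dose rate, the A* total cost is $f(n)=g(n)+h(n)$ with a heuristic $h$, and the DWA score $G$ augments the classical heading/clearance/velocity terms with a radiation-aware term and a speed-variation penalty.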
- New
- Research Article
- 10.1371/journal.pone.0335368
- Oct 28, 2025
- PLOS One
- Jiandong Qiu + 4 more
Fault detection in high-speed train wheelset bearings is paramount for ensuring operational safety. However, the scarcity of fault samples limits the accuracy of traditional detection methods. To address this challenge, this paper proposes a supervised generative model that integrates an Improved Auxiliary Classifier Generative Adversarial Network (IACGAN) with a Variational Auto-Encoder (VAE). Firstly, the method employs the VAE as the generator, introducing latent variables with prior information to optimize the encoding and generation process. Secondly, an independent classifier network is integrated into the ACGAN framework to enhance compatibility between the classification and discrimination capabilities. Concurrently, a loss function incorporating the Wasserstein distance and gradient penalty terms is designed to prevent vanishing gradients during training while satisfying the Lipschitz constraint, thereby improving model stability. Experiments conducted on the XJTU bearing dataset validate that samples generated by the proposed method demonstrate superior quality, assessed at both the data and feature levels, compared to several GAN variants. Furthermore, the constructed VAE-IACGAN-CNN detection model achieves an average classification accuracy of 88.04%, representing a maximum improvement of 15.17% over comparative methods. This significantly mitigates the accuracy degradation caused by sample imbalance, demonstrating the proposed approach's efficacy in resolving the low fault detection accuracy stemming from imbalanced high-speed train wheelset bearing samples.
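The Wasserstein loss with gradient penalty mentioned above typically takes the standard WGAN-GP form, shown here for reference (the paper's variant may differ in detail):
$$L_D \;=\; \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\big[D(\tilde{x})\big] \;-\; \mathbb{E}_{x\sim\mathbb{P}_r}\big[D(x)\big] \;+\; \lambda\,\mathbb{E}_{\hat{x}}\Big[\big(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\big)^2\Big],$$
where $\hat{x}$ is sampled uniformly along straight lines between real and generated samples; the gradient-penalty term enforces the 1-Lipschitz constraint without weight clipping and helps prevent vanishing gradients.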
- New
- Research Article
- 10.3390/s25216573
- Oct 25, 2025
- Sensors
- Ruizhi Zhang + 5 more
Real-time object detection in Unmanned Aerial Vehicle (UAV) imagery is critical yet challenging, requiring high accuracy amidst complex scenes with multi-scale and small objects, under stringent onboard computational constraints. While existing methods struggle to balance accuracy and efficiency, we propose RTUAV-YOLO, a family of lightweight models based on YOLOv11 tailored for UAV real-time object detection. First, to mitigate the feature imbalance and progressive information degradation of small objects in the multi-scale processing of current architectures, we developed a Multi-Scale Feature Adaptive Modulation module (MSFAM) that enhances small-target feature extraction through adaptive weight generation and dual-pathway heterogeneous feature aggregation. Second, to overcome the limitations in contextual information acquisition exhibited by current architectures in complex scene analysis, we propose a Progressive Dilated Separable Convolution Module (PDSCM) that achieves effective aggregation of multi-scale target contextual information through continuous receptive field expansion. Third, to preserve fine-grained spatial information of small objects during feature map downsampling, we engineered a Lightweight DownSampling Module (LDSM) to replace the traditional convolutional module. Finally, to rectify the insensitivity of current Intersection over Union (IoU) metrics toward small objects, we introduce the Minimum Point Distance Wise IoU (MPDWIoU) loss function, which enhances small-target localization precision through the integration of distance-aware penalty terms and adaptive weighting mechanisms. Comprehensive experiments on the VisDrone2019 dataset show that RTUAV-YOLO achieves average improvements of 3.4% and 2.4% in mAP50 and mAP50-95, respectively, compared to the baseline model, while reducing the number of parameters by 65.3%. Its generalization capability for UAV object detection is further validated on the UAVDT and UAVVaste datasets. The proposed model is deployed on a typical airborne platform, the Jetson Orin Nano, providing an effective solution for real-time object detection in actual UAV scenarios.
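As a reference point for distance-aware IoU penalties, the widely used DIoU loss augments $1-\mathrm{IoU}$ with a normalized center-distance term,
$$L_{\mathrm{DIoU}} \;=\; 1 - \mathrm{IoU} \;+\; \frac{\rho^{2}\big(\mathbf{b}, \mathbf{b}^{gt}\big)}{c^{2}},$$
where $\rho$ is the Euclidean distance between the predicted and ground-truth box centers and $c$ is the diagonal length of the smallest enclosing box; the proposed MPDWIoU follows the same idea but combines minimum-point-distance terms with adaptive weighting tuned for small objects.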
- New
- Research Article
- 10.1371/journal.pone.0335072
- Oct 24, 2025
- PLOS One
- Muhammad Luqman + 3 more
The problem of ill-conditioned data, or multicollinearity, is common in regression modelling. It results in imprecise parameter estimation, which leads to an inability to gauge the true impact of explanatory variables on the response. Moreover, under strong multicollinearity, the standard errors of the parameter estimates become inflated, leading to wider confidence intervals and hence an increased risk of type-II error. Different approaches have been proposed in the literature to handle the problem; primarily, such techniques penalize the coefficient estimates in one way or another. Ridge regression is one of the most widely applied of these techniques. In ridge regression, a penalty term is added to the objective function of the general linear model. That penalty term introduces a small amount of bias in the parameter estimates with the aim of decreasing the mean square error. In the current article, some new choices for the ridge constant are proposed. The performance of the proposed ridge choices is compared through Monte Carlo simulations under different scenarios, using the mean square error as the measure of performance. The simulation results indicate that the proposed ridge estimators perform better than existing ridge constants in most cases, across different degrees of multicollinearity severity, numbers of explanatory variables, sample sizes and error variance structures. The simulation results are further corroborated by comparing the performance of the proposed ridge penalties in two real-life applications.
- New
- Research Article
- 10.4208/eajam.2024-198.060425
- Oct 23, 2025
- East Asian Journal on Applied Mathematics
- Ting Wei + 2 more
This paper is devoted to identifying the order of the time fractional derivative and a space-dependent source term in a time fractional diffusion-wave equation from additional measured data in a subdomain or on a subboundary over a small time period. The Lipschitz continuity of the forward operators mapping the unknown order and source term to the given data is established based on stability estimates of the solution of the direct problem. We prove the uniqueness of the considered inverse problems by using the asymptotic behavior of the solution at $t=0$, the Titchmarsh convolution theorem and the Duhamel principle. Moreover, a Tikhonov-type regularization method is proposed with the $H^1$-norm as a penalty term. The existence of the regularized solution and its convergence to the exact solution under a suitable regularization parameter choice are obtained. We then employ a linearized iteration algorithm combined with a piecewise linear finite element approximation to simultaneously recover the approximate order and space-dependent source term. Three numerical examples for one- and two-dimensional cases are tested, and the numerical results demonstrate the effectiveness of the proposed method.
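The Tikhonov-type functional referred to above has the generic form (notation illustrative)
$$J_{\gamma}(\alpha, f) \;=\; \big\lVert \mathcal{F}(\alpha, f) - z^{\delta}\big\rVert_{L^2}^2 \;+\; \gamma\,\lVert f\rVert_{H^1}^2,$$
where $\mathcal{F}$ is the forward operator mapping the fractional order $\alpha$ and the space-dependent source $f$ to the measured data, $z^{\delta}$ is the noisy observation with noise level $\delta$, and $\gamma>0$ is the regularization parameter whose choice governs the convergence of the regularized solutions to the exact one.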
- New
- Research Article
- 10.3221/igf-esis.75.10
- Oct 21, 2025
- Frattura ed Integrità Strutturale
- Péter Grubits + 2 more
This paper presents an advanced optimization methodology for truss structures, addressing both elasto-plastic design cases—where plastic deformations are controlled within predefined limits—and purely elastic scenarios, in which inelastic behavior is entirely prevented. Plastic deformations are characterized and quantified using the complementary strain energy of residual forces, providing a reliable measure of inelastic response. Additional design constraints related to load-bearing capacity and structural stability are incorporated through penalization terms, while the objective function focuses on minimizing structural weight. Serviceability restrictions are also applied to enhance the robustness of the design. Building on these considerations, the framework integrates material and geometric nonlinear finite element analysis (FEA) with a genetic algorithm (GA), enabling the automated determination of optimal cross-sectional areas for individual bar members within a predefined design domain. To further enhance safety, geometric imperfections are introduced through an integrated automatic strategy. The effectiveness of the proposed technique was validated using two benchmark numerical examples: a 37-bar planar truss and a 25-bar space truss, evaluated under multiple design scenarios. The results highlight the flexibility and reliability of the optimization framework, demonstrating substantial weight savings while fully meeting all structural performance criteria.
- New
- Research Article
- 10.1017/prm.2025.10075
- Oct 20, 2025
- Proceedings of the Royal Society of Edinburgh: Section A Mathematics
- He Zhang + 2 more
In this paper, we study the existence of solutions to the following Hartree equation \begin{align*} \begin{cases} -\Delta u+\lambda V(x) u+\mu u=\left(\int_{\mathbb{R}^N}\frac{|u(y)|^p}{|x-y|^{N-\alpha}}\,dy\right)|u|^{p-2}u, &\text{in}\ \mathbb{R}^N,\\ \int_{\mathbb{R}^N}|u|^2\,dx=\omega, \end{cases} \end{align*} where $N\geq 3$, $\omega,\lambda>0$, $p\in \left(\frac{N+\alpha}{N}, \frac{N+\alpha}{N-2}\right)\setminus\left\{\frac{N+\alpha+2}{N}\right\}$ and $\mu$ appears as a Lagrange multiplier. We assume that $0\leq V\in L^{\infty}_{loc}(\mathbb{R}^N)$ has a bottom $\mathrm{int}\,V^{-1}(0)$ composed of $\ell_0$ $(\ell_{0}\geq1)$ connected components $\{\Omega_i\}_{i=1}^{\ell_0}$, where $\mathrm{int}\,V^{-1}(0)$ is the interior of the zero set $V^{-1}(0)=\{x\in\mathbb{R}^N \mid V(x)=0\}$ of $V$. It is worth pointing out that the penalization technique is no longer applicable in the locally sublinear case $p\in \left(\frac{N+\alpha}{N},2\right)$. Therefore, we develop a new variational method in which two deformation flows are established that reflect the properties of the potential. Moreover, we find a critical point without introducing a penalization term and give the existence result for $p\in \left(\frac{N+\alpha}{N}, \frac{N+\alpha}{N-2}\right)\setminus\left\{\frac{N+\alpha+2}{N}\right\}$. When $\omega$ is fixed and $\omega^{\frac{-(p-1)}{-Np+N+\alpha+2}}$ is sufficiently small, we construct an $\ell$-bump $(1\leq\ell\leq \ell_{0})$ positive normalized solution, which concentrates at $\ell$ prescribed components $\{\Omega_i\}^{\ell}_{i=1}$ for large $\lambda$. We also consider the asymptotic profile of the solutions as $\lambda\rightarrow\infty$ and $\omega^{\frac{-(p-1)}{-Np+N+\alpha+2}}\rightarrow 0$.
- New
- Research Article
- 10.30910/turkjans.1757995
- Oct 17, 2025
- Türk Tarım ve Doğa Bilimleri Dergisi
- Tolga Tolun
This study aimed to predict an important biological trait, egg albumen height, in the presence of multicollinearity, using several external quality parameters (egg weight, width, length, shape index, Haugh unit). Although a high coefficient of determination (R² = 0.995) was obtained for the multiple regression model fitted by the classical least squares method (LSM), a serious multicollinearity problem was detected among the independent variables, negatively affecting the model's reliability. To address this issue, LASSO and Liu regression techniques were applied; in the models generated with both methods, R² decreased to approximately 0.89, but the multicollinearity problem was effectively mitigated. Comparisons also showed that the Liu regression model outperformed the LASSO model in terms of information criteria (AIC, cAIC, BIC). The results show that regression methods with penalty terms provide reliable and consistent estimates for data sets with multicollinearity problems, and these techniques are recommended for the analysis and modeling of biological data.
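For reference, the two shrinkage estimators compared in the study are, in standard notation,
$$\hat{\beta}_{\mathrm{LASSO}} \;=\; \arg\min_{\beta}\,\lVert y - X\beta\rVert_2^2 + \lambda\lVert\beta\rVert_1, \qquad \hat{\beta}_d^{\mathrm{Liu}} \;=\; \big(X^{\top}X + I\big)^{-1}\big(X^{\top}y + d\,\hat{\beta}_{\mathrm{OLS}}\big), \quad 0<d<1,$$
where the tuning parameter $\lambda>0$ and the biasing parameter $d$ control how strongly the least-squares solution is shrunk.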
- New
- Research Article
- 10.3390/e27101069
- Oct 14, 2025
- Entropy (Basel, Switzerland)
- Ruoyue Mao + 2 more
Classification is an important task in the field of machine learning. Decision tree algorithms are a popular choice for handling classification tasks due to their high accuracy, simple algorithmic process, and good interpretability. Traditional decision tree algorithms, such as ID3, C4.5, and CART, differ primarily in their criteria for splitting trees. Shannon entropy, Gini index, and mean squared error are all examples of measures that can be used as splitting criteria. However, their performance varies on different datasets, making it difficult to determine the optimal splitting criterion. As a result, the algorithms lack flexibility. In this paper, we introduce the concept of generalized entropy from information theory, which unifies many splitting criteria under one free parameter, as the split criterion for decision trees. We propose a new decision tree algorithm called RSE (RS-Entropy decision tree). Additionally, we improve upon a two-term information measure method by incorporating penalty terms and coefficients into the split criterion, leading to a new decision tree algorithm called RSEIM (RS-Entropy Information Method). In theory, the improved algorithms RSE and RSEIM are more flexible due to the presence of multiple free parameters. In experiments conducted on several datasets, using genetic algorithms to optimize the parameters, our proposed RSE and RSEIM methods significantly outperform traditional decision tree methods in terms of classification accuracy without increasing the complexity of the resulting trees.
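One common one-parameter family of this kind is Tsallis entropy, which recovers Shannon entropy as $q \to 1$ and the Gini index at $q = 2$; the sketch below uses it as a drop-in splitting impurity (the paper's RS-entropy and the RSE/RSEIM penalty terms may differ in detail).

```python
import numpy as np

def tsallis_entropy(labels, q=1.5):
    """Generalized (Tsallis) entropy of a label vector; q is the free parameter."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    if np.isclose(q, 1.0):                       # q -> 1 limit: Shannon entropy
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)    # q = 2 gives the Gini index

def split_gain(y, mask, q=1.5):
    """Impurity decrease from splitting y into y[mask] and y[~mask]."""
    n, nl = len(y), int(mask.sum())
    if nl in (0, n):
        return 0.0
    parent = tsallis_entropy(y, q)
    children = (nl * tsallis_entropy(y[mask], q)
                + (n - nl) * tsallis_entropy(y[~mask], q)) / n
    return parent - children

y = np.array([0, 0, 0, 1, 1, 1])
x = np.array([1.0, 1.2, 1.1, 3.0, 3.2, 2.9])
print(split_gain(y, x < 2.0, q=2.0))   # pure split: gain equals the parent Gini, 0.5
```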
- Research Article
- 10.3390/s25196243
- Oct 9, 2025
- Sensors (Basel, Switzerland)
- Hemalatha Gunasekaran + 3 more
Agriculture remains the backbone of many countries and plays a pivotal role in shaping a country's overall economy. Accurate prediction in agricultural practice, particularly crop recommendation, can greatly enhance productivity and resource management. Although IoT and AI technologies have great potential for enhancing precision farming, traditional machine learning (ML) and ensemble learning (EL) models rely primarily on the training data for their predictions. When the training data are noisy or limited, these models can produce inaccurate or unrealistic predictions. These limitations are addressed by incorporating physical laws into the ML framework, thereby ensuring that the predictions remain physically plausible. In this study, we conducted a detailed analysis of ML and EL models, both with and without optimization, and compared their performance against a physics-informed ML model. In the proposed stacking physics-informed ML model, the optimal temperature and pH for each crop (the physics law) are provided as input during the training process in addition to the training data. The physics-informed model was trained to satisfy two objectives simultaneously: (1) fitting the data, and (2) adhering to the physics law. This was achieved by including a penalty term within its total loss function, forcing the model to make predictions that are both accurate and physically feasible. Our findings indicate that the proposed stacking physics-informed model achieved the highest accuracy, 99.50%, compared with the ML and EL models with optimization.
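A minimal sketch of combining a data-fit term with a physics-based penalty in one total loss, in the spirit of the framework described above; the crop ranges, weight, and model outputs here are hypothetical, not the study's.

```python
import numpy as np

# Hypothetical agronomic ranges (the "physics law"): crop -> (T_min, T_max, pH_min, pH_max)
RANGES = {0: (20.0, 30.0, 5.5, 7.0), 1: (10.0, 22.0, 6.0, 7.5)}

def physics_penalty(probs, temp, ph):
    """Penalize probability mass placed on crops whose optimal temperature/pH
    range is violated by the observed conditions (hinge-style violations)."""
    pen = 0.0
    for crop, (tlo, thi, plo, phi) in RANGES.items():
        viol = (np.maximum(0.0, tlo - temp) + np.maximum(0.0, temp - thi)
                + np.maximum(0.0, plo - ph) + np.maximum(0.0, ph - phi))
        pen += np.mean(probs[:, crop] * viol)
    return pen

def total_loss(probs, y_onehot, temp, ph, lam=0.1):
    data_fit = -np.mean(np.sum(y_onehot * np.log(probs + 1e-12), axis=1))  # cross-entropy
    return data_fit + lam * physics_penalty(probs, temp, ph)               # physics penalty term

# Two samples: conditions favour crop 0 for the first, violate its range for the second.
probs = np.array([[0.9, 0.1], [0.8, 0.2]])
y_onehot = np.array([[1.0, 0.0], [0.0, 1.0]])
print(total_loss(probs, y_onehot, temp=np.array([25.0, 5.0]), ph=np.array([6.5, 6.5])))
```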