- Research Article
- 10.3390/a18110706
- Nov 5, 2025
- Algorithms
- Ali Mahdavian + 2 more
Integrating recommendation systems with dynamic pricing strategies is essential for enhancing product sales and optimizing revenue in modern business. This study proposes a novel product recommendation model that uses Reinforcement Learning to tailor pricing strategies to customer purchase intentions. While traditional recommendation systems focus on identifying products customers prefer, they often neglect the critical factor of pricing. To improve effectiveness and increase conversion, it is crucial to personalize product prices according to the customer’s willingness to pay (WTP). Businesses often use fixed-budget promotions to boost sales, emphasizing the importance of strategic pricing. Designing intelligent promotions requires recommending products aligned with customer preferences and setting prices reflecting their WTP, thus increasing the likelihood of purchase. This research advances existing recommendation systems by integrating dynamic pricing into the system’s output, offering a significant innovation in business practice. However, this integration introduces technical complexities, which are addressed through a Markov Decision Process (MDP) framework and solved using Reinforcement Learning. Empirical evaluation using the Dunnhumby dataset shows promising results. Due to the lack of direct comparisons between combined product recommendation and pricing models, the outputs were simplified into two categories: purchase and non-purchase. This approach revealed significant improvements over comparable methods, demonstrating the model’s efficacy.
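The joint recommendation-and-pricing decision can be framed as choosing a (product, price) action and learning its value from purchase feedback. The sketch below is a minimal, single-state Q-learning toy, not the paper's model: the products, price levels, willingness-to-pay values, and preference probabilities are all hypothetical, and the reward is simply the revenue of a simulated purchase.

```python
import random

random.seed(0)

# Toy stand-in for the paper's MDP: each action is a (product, price) pair and
# the reward is the revenue collected when a simulated customer buys.
PRODUCTS = ["A", "B"]
PRICES = [5.0, 8.0, 12.0]
ACTIONS = [(p, pr) for p in PRODUCTS for pr in PRICES]
WTP = {"A": 9.0, "B": 6.0}    # hypothetical willingness-to-pay per product
PREF = {"A": 0.9, "B": 0.4}   # hypothetical purchase propensity per product

def simulate_purchase(product, price):
    """Revenue reward: the customer never buys above their WTP, and below it
    buys with a probability given by their preference for the product."""
    if price > WTP[product]:
        return 0.0
    return price if random.random() < PREF[product] else 0.0

# Single-state Q-learning (a bandit-style special case of the MDP).
q = {a: 0.0 for a in ACTIONS}
alpha, eps = 0.1, 0.2
for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
    r = simulate_purchase(*a)
    q[a] += alpha * (r - q[a])

best = max(q, key=q.get)
print("learned best (product, price):", best)
```

With these toy parameters the learner settles on product A at the 8.0 price point: it is below that customer's willingness-to-pay, so it earns more expected revenue than the cheaper price and more than any price the customer refuses.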
- Research Article
- 10.3390/a18110705
- Nov 5, 2025
- Algorithms
- Mario Ivan Nava-Bustamante + 4 more
This paper presents a robust and reliable voltage regulation method in DC–DC converters, for which a multiloop control strategy is developed and analyzed for a boost converter. The proposed control scheme consists of an inner current loop and an outer voltage loop, both systematically designed using the control Lyapunov function (CLF) methodology. The main contributions of this work are (1) the formulation of a control structure capable of maintaining performance under variations in load, reference voltage, and input voltage; (2) the theoretical demonstration of global asymptotic stability of the closed-loop system in the Lyapunov sense; and (3) the experimental validation of the proposed controller on a physical DC–DC boost converter, confirming its effectiveness. The results support the advancement of high-efficiency nonlinear control methods for power electronics applications. Furthermore, the experimental findings reinforce the practical relevance and real-world applicability of the proposed approach.
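The cascaded CLF construction can be summarized abstractly; the sketch below is a generic statement of the idea, not the paper's converter-specific derivation (its state model, error definitions, and gains are not reproduced here).

```latex
% Generic CLF sketch (illustrative; not the paper's exact derivation).
For $\dot{x} = f(x) + g(x)\,u$, a smooth positive-definite $V$ is a
control Lyapunov function if for every $x \neq 0$ some admissible $u$ gives
\[
  \dot{V}(x) = \nabla V(x)\,\bigl(f(x) + g(x)\,u\bigr) < 0 .
\]
In a cascaded design, an outer-loop CLF on the voltage error $e_v$
produces a current reference $i^{*}$, and an inner-loop CLF
\[
  V_i(e_i) = \tfrac{1}{2}\,e_i^{2}, \qquad e_i = i_L - i^{*},
\]
is driven by a duty-ratio law chosen so that
$\dot{V}_i \le -k\,e_i^{2}$ with $k > 0$, giving exponential decay of the
current error and, together with the outer loop, closed-loop asymptotic
stability.
```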
- Research Article
- 10.3390/a18110704
- Nov 5, 2025
- Algorithms
- Shuchong Wang + 6 more
Thermal power units currently undertake peak-shaving and frequency-regulation duties, so their internal equipment operates under non-conventional conditions, can easily fail, and may cause unplanned unit shutdowns. To realize condition monitoring and early warning for key equipment inside coal-fired power units, this study proposes a deep learning-based anomaly detection model that combines a deep autoencoder (DAE), a Transformer, and a Gaussian mixture model (GMM). The DAE and the Transformer encoder extract static and time-series features from multi-dimensional operating data, and the GMM learns the feature distribution of normal data to detect anomalies. Validation on data from boiler superheater equipment and turbine bearings in real power plants shows that the model detects equipment anomalies earlier than traditional methods and is more stable, with fewer false alarms. Applied to the superheater equipment, the proposed model triggered early warnings approximately 90 h before the actual failure time and reduced the missed detection rate by 70% compared to the Transformer-GMM (TGMM) model, verifying the model's validity and early warning capability.
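The GMM-based scoring step can be illustrated compactly. The sketch below is a simplified stand-in: it fits a single diagonal Gaussian (rather than a full mixture) to synthetic "normal" feature vectors, scores points by log-likelihood, and flags anything below a low percentile of the normal scores; the DAE/Transformer feature extraction is not reproduced.

```python
import math
import random

random.seed(1)

# Synthetic stand-in for features of normal operation (the paper derives
# these from DAE/Transformer encodings of plant sensor data).
normal = [[random.gauss(0.0, 1.0), random.gauss(5.0, 0.5)] for _ in range(500)]

def fit_gaussian(data):
    """Fit a diagonal Gaussian (per-dimension mean and variance)."""
    d = len(data[0])
    mu = [sum(x[j] for x in data) / len(data) for j in range(d)]
    var = [sum((x[j] - mu[j]) ** 2 for x in data) / len(data) for j in range(d)]
    return mu, var

def log_likelihood(x, mu, var):
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mu, var))

mu, var = fit_gaussian(normal)
scores = [log_likelihood(x, mu, var) for x in normal]
threshold = sorted(scores)[int(0.01 * len(scores))]   # 1st-percentile cutoff

anomaly = [6.0, 2.0]   # far from the normal operating region
print("anomalous:", log_likelihood(anomaly, mu, var) < threshold)
```

Points whose likelihood under the learned normal-data distribution falls below the cutoff are raised as early warnings; a real GMM refines this by allowing several operating modes, each with its own component.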
- Research Article
- 10.3390/a18110700
- Nov 4, 2025
- Algorithms
- Bin Yuan + 3 more
This paper presents a fault diagnosis model for rolling bearings that addresses the challenges of establishing long-sequence correlations and extracting spatial features in deep-learning models. The proposed model combines SENet with an improved Informer model. Initially, local features are extracted using the Conv1d method, and the input data are optimized through normalization and embedding techniques. Next, the SE-Conv1d network is employed to adaptively enhance key features while suppressing noise interference. In the improved Informer model, the ProbSparse self-attention mechanism and self-attention distillation technique efficiently capture global dependencies in the long sequences of the rolling bearing dataset, significantly reducing computational complexity and improving accuracy. Finally, experiments on the CWRU and HUST datasets demonstrate that the proposed model achieves accuracy rates of 99.78% and 99.45%, respectively. Compared to other deep learning methods, the proposed model offers superior fault diagnosis accuracy, stability, and generalization ability.
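The squeeze-and-excitation idea behind the SE-Conv1d stage (reweighting feature channels by learned gates) can be shown in a few lines. The weights below are fixed illustrative values, not learned parameters, and the "feature map" is a toy channels-by-time array.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def se_block(features, w1, w2):
    """Minimal squeeze-and-excitation on a (channels x time) feature map."""
    # Squeeze: global average over the time axis, one scalar per channel.
    desc = [sum(ch) / len(ch) for ch in features]
    # Excitation: a tiny two-layer gate (ReLU then sigmoid).
    hidden = [max(0.0, w1 * d) for d in desc]
    gates = [sigmoid(w2 * h) for h in hidden]
    # Scale: reweight each channel by its gate in (0, 1).
    return [[g * v for v in ch] for g, ch in zip(gates, features)]

fmap = [[1.0, 2.0, 3.0],   # informative channel (large activations)
        [0.1, 0.0, 0.2]]   # near-silent channel
out = se_block(fmap, w1=1.0, w2=2.0)
print([round(row[0], 3) for row in out])
```

The strongly activated channel receives a gate near 1 and passes through almost unchanged, while the quiet channel is attenuated toward half strength; learned excitation weights let the network decide which channels carry fault-relevant information.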
- Research Article
- 10.3390/a18110702
- Nov 4, 2025
- Algorithms
- Shigang Wang + 2 more
Percutaneous puncture has become one of the most widely used minimally invasive techniques in clinical practice due to its low trauma, quick recovery, and easy operation. However, limited needle-tip motion, tissue barriers, and the complex distribution of sensitive organs make it difficult to balance puncture accuracy and safety. To this end, this paper proposes a new puncture path planning algorithm for flexible needles that integrates gravitational guidance, bi-directional adaptive expansion, A*-based optimal node selection, and path optimization strategies, with Bidirectional Rapidly-exploring Random Trees (Bi-RRT) at its core, to significantly improve obstacle avoidance and computational efficiency. Simulation results for 2D and 3D complex scenes in MATLAB show that, compared with the traditional RRT and Bi-RRT algorithms, the GBOPBi-RRT algorithm achieves significant advantages in path length, computation time, and number of expanded nodes. In particular, in the 3D environment, the GBOPBi-RRT algorithm shortens the planned path by 43.21% compared with RRT, 27.47% compared with RRT*, and 30.33% compared with Bi-RRT, providing a reliable solution for efficient planning of percutaneous puncture with flexible needles.
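For contrast with the paper's enhanced planner, a minimal single-tree RRT looks like the following; the scene, step size, and goal bias are illustrative, and none of the paper's additions (bi-directional growth, gravitational guidance, A*-style node selection, path optimization) are included.

```python
import math
import random

random.seed(2)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def steer(a, b, step=1.0):
    """Move from a toward b by at most one step length."""
    d = dist(a, b)
    if d <= step:
        return b
    t = step / d
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def collides(p, obstacles):
    return any(dist(p, c) <= r for c, r in obstacles)

def rrt(start, goal, obstacles, iters=3000, step=1.0, goal_bias=0.3):
    parent = {start: None}
    for _ in range(iters):
        sample = goal if random.random() < goal_bias else \
                 (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(parent, key=lambda n: dist(n, sample))
        new = steer(nearest, sample, step)
        if collides(new, obstacles):
            continue
        parent[new] = nearest
        if dist(new, goal) <= step:
            if new != goal:
                parent[goal] = new
            path = [goal]                      # walk back to the start
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

# One circular obstacle, placed off the straight-line corridor.
path = rrt((0.0, 0.0), (9.0, 9.0), obstacles=[((2.0, 7.0), 1.0)])
print("path found:", path is not None, "waypoints:", len(path or []))
```

The comparisons in the abstract (path length, computation time, node count) are exactly the quantities this baseline wastes: it samples uniformly, grows one tree, and returns an unsmoothed path.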
- Research Article
- 10.3390/a18110703
- Nov 4, 2025
- Algorithms
- Ali Ahmed + 5 more
Pneumonia remains a serious global health issue, particularly affecting vulnerable groups such as children and the elderly, for whom timely and accurate diagnosis is critical for effective treatment. Recent advances in deep learning have significantly enhanced pneumonia detection using chest X-rays, yet many current methods still face challenges with interpretability, efficiency, and clinical applicability. In this work, we propose a YOLOv11-based deep learning framework designed for real-time pneumonia detection, strengthened by the integration of Grad-CAM for visual interpretability. To further enhance robustness, the framework incorporates preprocessing techniques such as Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast improvement, region-of-interest extraction, and lung segmentation, ensuring both precise localization and improved focus on clinically relevant features. Evaluation on two publicly available datasets confirmed the effectiveness of the approach. On the COVID-19 Radiography Dataset, the system reached a macro-average accuracy of 98.50%, precision of 98.60%, recall of 97.40%, and F1-score of 97.99%. On the Chest X-ray COVID-19 & Pneumonia dataset, it achieved 98.06% accuracy, with correspondingly high precision and recall, yielding an F1-score of 98.06%. The Grad-CAM visualizations consistently highlighted pathologically relevant lung regions, providing radiologists with interpretable and trustworthy predictions. Comparative analysis with other recent approaches demonstrated the superiority of the proposed method in both diagnostic accuracy and transparency. With its combination of real-time processing, strong predictive capability, and explainable outputs, the framework represents a reliable and clinically applicable tool for supporting pneumonia and COVID-19 diagnosis in diverse healthcare settings.
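The contrast-enhancement step can be illustrated with plain global histogram equalization; CLAHE applies the same cumulative-distribution remapping per tile, with a clip limit that bounds contrast amplification. Everything below (the toy image included) is a simplified stand-in, not the paper's pipeline.

```python
def equalize(img, levels=256):
    """Global histogram equalization of an 8-bit grayscale image."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution, then rescale to the full intensity range.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    lut = [round((c - cdf_min) / max(1, n - cdf_min) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

# Low-contrast 2x4 "image": all values crowded into [100, 103].
img = [[100, 101, 102, 103],
       [100, 101, 102, 103]]
out = equalize(img)
print(out[0])   # → [0, 85, 170, 255]
```

The crowded intensity band is stretched across the full 0-255 range, which is why this family of methods makes faint lung opacities easier for a detector to pick up; CLAHE's tiling additionally adapts the mapping to local contrast.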
- Research Article
- 10.3390/a18110701
- Nov 4, 2025
- Algorithms
- Julian Weaver + 4 more
Schizophrenia is challenging to identify from resting-state functional MRI (rs-fMRI) due to subtle, distributed changes and the clinical need for transparent models. We build on the Swin 4D fMRI Transformer (SwiFT) to classify schizophrenia vs. controls and explain predictions with Transformer Layer-wise Relevance Propagation (TransLRP). We further introduce Swarm-LRP, a particle swarm optimization (PSO) scheme that tunes Layer-wise Relevance Propagation (LRP) rules against model-agnostic explainability (XAI) metrics from Quantus. On the COBRE dataset, TransLRP yields higher faithfulness and lower sensitivity/complexity than Integrated Gradients, and highlights physiologically plausible regions. Swarm-LRP improves single-subject explanation quality over baseline LRP by optimizing (α,γ,ϵ) values and discrete layer-rule assignments. These results suggest that architecture-aware explanations can recover spatiotemporal patterns of rs-fMRI relevant to schizophrenia while improving attribution robustness. This feasibility study indicates a path toward clinically interpretable neuroimaging models.
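The particle swarm optimization loop at the heart of Swarm-LRP can be sketched generically. Below, a toy sphere function stands in for the Quantus XAI metrics the paper actually optimizes, and the swarm hyperparameters are ordinary textbook values, not the paper's settings.

```python
import random

random.seed(3)

def pso(objective, dim=2, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize `objective` with a plain global-best particle swarm."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
print("best value:", best_val)
```

In Swarm-LRP the particle position would encode the LRP parameters (α, γ, ϵ) and layer-rule assignments, and the objective would be the (expensive) faithfulness/sensitivity/complexity scores, which is what makes a derivative-free optimizer like PSO a natural fit.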
- Research Article
- 10.3390/a18110693
- Nov 3, 2025
- Algorithms
- Nurdaulet Tasmurzayev + 6 more
Background: Coronary artery disease (CAD) remains a leading cause of morbidity and mortality. Early diagnosis reduces adverse outcomes and alleviates the burden on healthcare, yet conventional approaches are often invasive, costly, and not always available. In this context, machine learning offers promising solutions. Objective: The objective of this study is to evaluate the feasibility of reliably predicting both the presence and the severity of CAD. The analysis is based on a harmonized, multi-center UCI dataset that includes cohorts from Cleveland, Hungary, Switzerland, and Long Beach. The work aims to assess the accuracy and practical utility of models built exclusively on routine tabular clinical and demographic data, without relying on imaging. These models are designed to improve risk stratification and guide patient routing. Methods and Results: The study is based on a uniform, standardized data processing pipeline: handling of missing values, feature encoding, scaling, an 80/20 train-test split, and application of the SMOTE method exclusively to the training set to prevent information leakage. Within this pipeline, a standardized comparison of a wide range of models (including gradient boosting, tree-based ensembles, and support vector methods) was conducted with hyperparameter tuning via GridSearchCV. The best results were demonstrated by the CatBoost model: accuracy 0.8278, recall 0.8407, and F1-score 0.8436. Conclusions: A key distinction of this work is the comprehensive evaluation of the models' practical suitability. Beyond standard metrics, analysis of calibration curves confirmed the reliability of the probabilistic predictions. Patient-level interpretability using SHAP showed that the model relies on clinically significant predictors, including ST-segment depression. Calibrated and explainable models based on readily available data are positioned as a practical tool for scalable risk stratification and decision support, especially in resource-constrained settings.
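The leakage-avoidance point (rebalance only after splitting) can be made concrete. The sketch below uses plain random duplication of the minority class as a stand-in for SMOTE, which instead interpolates synthetic minority samples; the data are synthetic.

```python
import random

random.seed(4)

def train_test_split(X, y, test_frac=0.2):
    """Shuffle indices, then carve off the last test_frac as the test fold."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    cut = int(len(X) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([X[i] for i in tr], [y[i] for i in tr],
            [X[i] for i in te], [y[i] for i in te])

def oversample(X, y):
    """Duplicate random minority-class rows until the classes balance."""
    pos = [i for i, t in enumerate(y) if t == 1]
    neg = [i for i, t in enumerate(y) if t == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [random.choice(minority)
             for _ in range(len(majority) - len(minority))]
    keep = list(range(len(y))) + extra
    return [X[i] for i in keep], [y[i] for i in keep]

# Imbalanced toy data: 80 negatives, 20 positives.
X = [[float(i)] for i in range(100)]
y = [1 if i < 20 else 0 for i in range(100)]
Xtr, ytr, Xte, yte = train_test_split(X, y)
Xtr, ytr = oversample(Xtr, ytr)   # the test fold is never touched
print("train class counts:", ytr.count(0), ytr.count(1),
      "| test size:", len(yte))
```

Oversampling before the split would let copies (or SMOTE interpolants) of a test patient leak into training, inflating every reported metric; splitting first keeps the evaluation honest.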
- Research Article
- 10.3390/a18110696
- Nov 3, 2025
- Algorithms
- Vasso E Papadimitriou + 1 more
Early cost assessment is an essential part of building construction strategy; however, preliminary estimates are often unreliable when data are incomplete, which causes budget overruns. Traditional prediction techniques are generally imprecise and slow, particularly while the project scope is still unclear. The present research overcomes these drawbacks by introducing a hybrid framework that uses artificial neural networks (ANNs) for renovation cost estimation, enhanced by the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to guarantee contextual relevance and input accuracy. Utilizing data from projects that are structurally and contextually comparable enhances the model's predictive reliability and robustness. Based on a thorough literature review and actual renovation data from construction businesses, the study builds, trains, and tests two ANN models using IBM SPSS Statistics software. One model used 53 data points from prior building renovation projects, whereas the second used 11 data points from building renovation projects screened with the TOPSIS technique. Both models are based on the Radial Basis Function (RBF) procedure and take 14 inputs (total initial cost, estimated completion time, initial demolition drainage cost, initial plumbing cost, initial heating cost, initial electrical cost, initial masonry coatings cost, initial plasterboard construction cost, initial bathroom cost, initial flooring cost, initial frame cost, initial door cost, initial paint cost, and initial kitchen construction cost) and one output, the total final cost. The models show excellent performance, with a relative error near 0.5 and a sum-of-squares error of up to 0.3 monetary units before applying the TOPSIS method, and a relative error near 0.6 and a sum-of-squares error of up to 0.8 monetary units after TOPSIS implementation, demonstrating the usefulness and speed of the ANN, in combination with the TOPSIS approach, for estimating overall renovation costs. This hybridized approach expedites the entire estimation procedure.
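The TOPSIS screening step follows a standard recipe: vector-normalize each criterion column, weight it, measure each alternative's distance to the ideal and anti-ideal points, and rank by relative closeness. The matrix and weights below are illustrative, not the paper's project data, and all criteria are treated as benefit criteria.

```python
import math

def topsis(matrix, weights):
    """Closeness-to-ideal scores for alternatives (rows) over benefit
    criteria (columns); higher is better."""
    n_crit = len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)]
         for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # best value per criterion
    anti = [min(col) for col in zip(*v)]    # worst value per criterion
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three candidate projects scored on three benefit criteria.
matrix = [[7.0, 9.0, 8.0],   # strong on every criterion
          [4.0, 5.0, 6.0],
          [2.0, 3.0, 3.0]]
scores = topsis(matrix, weights=[0.5, 0.3, 0.2])
print("closeness:", [round(s, 3) for s in scores])
```

An alternative that is best on every criterion coincides with the ideal point and scores 1, while one that is worst everywhere scores 0; in the paper's setting, only the top-ranked comparable projects feed the second ANN model.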
- Research Article
- 10.3390/a18110697
- Nov 3, 2025
- Algorithms
- Nurdaulet Tasmurzayev + 6 more
Coronary artery disease (CAD) is a leading cause of global mortality, demanding accurate and early risk assessment. While machine learning models offer strong predictive power, their clinical adoption is often hindered by a lack of transparency and reliability. This study aimed to develop and rigorously evaluate a calibrated, interpretable machine learning framework for CAD prediction using 56 routinely collected clinical and demographic variables from the Z-Alizadeh Sani dataset (n = 303). A systematic protocol involving comprehensive preprocessing, class rebalancing using SMOTE, and grid-search hyperparameter tuning was applied to five distinct classifiers. The XGBoost model demonstrated the highest predictive performance, achieving an accuracy of 0.9011, an F1 score of 0.8163, and an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.92. Post hoc interpretability analysis using SHAP (Shapley Additive Explanations) identified hypertension (HTN), valvular heart disease (VHD), and diabetes mellitus (DM) as the most significant predictors of CAD. Furthermore, calibration analysis confirmed that the model's probability estimates are reliable for clinical risk stratification. This work presents a robust framework that combines high predictive accuracy with clinical interpretability, offering a promising tool for early CAD screening and decision support.
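The calibration check can be reproduced in miniature: a Brier score plus a binned reliability table comparing mean predicted probability with observed positive frequency. The predictions below are synthetic and deliberately well calibrated, not the paper's XGBoost outputs.

```python
def brier(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

def reliability(probs, labels, n_bins=5):
    """Per-bin (mean predicted probability, observed positive fraction)."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        bins[min(n_bins - 1, int(p * n_bins))].append((p, y))
    table = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            frac_pos = sum(y for _, y in b) / len(b)
            table.append((round(mean_p, 2), round(frac_pos, 2)))
    return table

# A well-calibrated toy model: predicted probabilities match outcome rates
# (1 positive in 10 at p=0.1, 9 positives in 10 at p=0.9).
probs  = [0.1] * 10 + [0.9] * 10
labels = [0] * 9 + [1] + [1] * 9 + [0]
print("Brier:", brier(probs, labels))
print("reliability (mean_p, frac_pos):", reliability(probs, labels))
```

For a calibrated model, the two entries in each reliability pair agree, which is exactly what licenses reading the model's output as a clinical risk probability rather than just a ranking score.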