Articles published on Learning Methods
- New
- Research Article
- 10.1161/circ.152.suppl_3.4373195
- Nov 4, 2025
- Circulation
- Sahaj Patel + 9 more
Background: End-diastolic (ED) and end-systolic (ES) frames are critical for left ventricular (LV) volume measurements in echocardiography but show high inter- and intra-observer variability. Deep learning (DL) methods have emerged for ED/ES detection; however, these typically rely on manually annotated reference frames and often fail to generalize across different image types, such as contrast and non-contrast echocardiographic views. Methods: A novel, fully automated framework was developed for localizing ED/ES frames in both contrast and non-contrast cine loops without the use of manual annotations. The process begins with the YOLO (v12) object detection DL model to identify the LV as a region of interest (ROI); alternatively, a fixed bounding box or no localization step may be used. The largest ROI is selected to crop the cine loop. Robust principal component analysis is then applied to decompose the cine into low-rank and sparse components, and singular value decomposition of the low-rank matrix extracts the top three left singular vectors (U). Pseudo-periodic cardiac cycles are identified in each singular vector using the Spectral Dominance Ratio. Zero-crossings and their variances are computed, and the vector with the lowest variance (and at least two cycles) is chosen to represent the cardiac cycle. A peak detection algorithm then identifies local extrema corresponding to the ED/ES frames. Results: The method was validated using a UAB dataset (N=984; 912 contrast, 72 non-contrast) and the publicly available EchoNet-Dynamic dataset (N=10,030, non-contrast) for external validation. The YOLO model was trained exclusively on the UAB dataset (1394 images for training, 298 for validation, and 300 for testing). On the UAB test set, the model achieved a mean Average Precision (mAP50) of 0.994 and mAP50-95 of 0.717. Mean absolute errors (MAE) in the UAB dataset were 2.65 ± 2.95 frames (median 2) for ED and 1.58 ± 1.49 frames (median 1) for ES.
In the EchoNet dataset, the MAE was 3.75 ± 4.02 frames (median 2) for ED and 2.72 ± 2.81 frames (median 2) for ES. The framework excluded 5 UAB and 115 EchoNet cases in which only one cardiac cycle was detected in U. Conclusion: A robust and generalizable framework has been presented for localizing ED/ES frames without reliance on manually labeled training data. This approach supports both contrast and non-contrast images and can function with or without DL-based ROI detection, offering a scalable, fully automated solution for echocardiographic analysis.
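A minimal sketch of the singular-vector selection and extrema-detection steps described above, assuming the low-rank matrix is arranged as frames × pixels; function names are ours, not the authors':

```python
import numpy as np

def zero_crossings(u):
    """Indices where the mean-centred signal changes sign."""
    u = u - u.mean()
    return np.where(np.diff(np.signbit(u)))[0]

def pick_cardiac_signal(M, k=3):
    """Among the top-k left singular vectors of the low-rank matrix M
    (frames x pixels), pick the one whose zero-crossing intervals have
    the lowest variance and that spans at least two cycles."""
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    best, best_var = None, np.inf
    for i in range(min(k, U.shape[1])):
        zc = zero_crossings(U[:, i])
        if len(zc) < 4:                      # fewer than ~two full cycles
            continue
        v = np.var(np.diff(zc))
        if v < best_var:
            best, best_var = U[:, i], v
    return best

def local_extrema(u):
    """Frame indices of local maxima/minima (candidate ED/ES frames)."""
    peaks = [i for i in range(1, len(u) - 1) if u[i - 1] < u[i] > u[i + 1]]
    troughs = [i for i in range(1, len(u) - 1) if u[i - 1] > u[i] < u[i + 1]]
    return peaks, troughs
```

On a synthetic rank-one cine whose temporal factor is a sinusoid, the selected vector recovers the periodic signal and the extrema fall on its peaks and troughs.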
- New
- Research Article
- 10.1161/circ.152.suppl_3.4369530
- Nov 4, 2025
- Circulation
- Jimmy Zheng + 11 more
Introduction/Background: Aortic valve calcification (AVC) quantification is recommended by guidelines as an imaging biomarker for aortic stenosis (AS) severity and progression yet remains underreported on routine chest computed tomography (CT). With nearly 20 million non-gated chest CTs performed annually in the U.S., automated AVC detection offers potential for opportunistic AS screening without additional radiation exposure or cost. Research Question/Hypothesis: We hypothesized that deep learning methods can accurately quantify AVC from non-gated, non-contrast chest CT scans with performance comparable to expert assessment. Methods/Approach: We developed a convolutional neural network to automatically detect and quantify AVC from non-gated chest CTs. The algorithm was trained and validated on 1,807 imaging studies across 8 large health systems in the U.S. and Brazil from 2021 to 2024. Model performance was evaluated on a holdout set of 239 CT studies from 33 sites across three U.S. geographic regions. The reference standard consisted of manual segmentations independently verified by at least two board-certified radiologists. Performance was evaluated by sensitivity, specificity, and Pearson correlation between algorithm-estimated and ground truth Agatston scores. Subgroup analyses across age categories, sex, geographic regions, CT manufacturers, and technical parameters were conducted. Results/Data: The deep learning algorithm demonstrated high correlation with expert reference standards (Pearson r = 0.99; 95% CI, 0.98-0.99; P <.001). Bland-Altman analysis showed minimal bias with mean difference of 5.2 AU (95% CI, -7.6 to 17.9 AU) and standard deviation of 99.9 AU. For detecting moderate-to-severe AS (>125 AU for females, >275 AU for males), sensitivity was 0.92 (95% CI, 0.80-0.97) and specificity was 0.98 (95% CI, 0.95-0.99). 
For severe AS (>600 AU for females, >1100 AU for males), sensitivity was 0.91 (95% CI, 0.62-0.98) and specificity was 1.00 (95% CI, 0.98-1.00). Performance remained consistent across demographic subgroups and CT technical parameters. Conclusion: This automated deep learning algorithm accurately quantifies AVC from routine chest CT scans with performance comparable to experts. Implementation into existing radiology workflows may enable opportunistic AS screening, potentially facilitating earlier identification and timely intervention. Prospective studies are needed to determine whether automated AVC screening improves clinical outcomes.
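The sex-specific Agatston cutoffs quoted above reduce to a simple rule; the sketch below uses the thresholds from the abstract, with a hypothetical function name:

```python
def classify_avc(agatston, sex):
    """Map an algorithm-estimated Agatston score (AU) to an AS severity
    flag using the sex-specific cutoffs quoted in the abstract.
    sex: "F" or "M"."""
    moderate = {"F": 125, "M": 275}[sex]
    severe = {"F": 600, "M": 1100}[sex]
    if agatston > severe:
        return "severe"
    if agatston > moderate:
        return "moderate-to-severe"
    return "below screening threshold"
```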
- New
- Research Article
- 10.1007/s10439-025-03872-2
- Nov 4, 2025
- Annals of biomedical engineering
- Hui Xu + 9 more
This study proposes a novel algorithm framework for optical coherence tomography (OCT) and carotid angiography co-registration (cACR), aiming to improve diagnostic precision and treatment planning for carotid artery disease. The OCT-cACR algorithm integrates an enhanced U-Net segmentation model and a marker detection algorithm for segmenting target carotid vessels and detecting OCT probe markers. Based on the segmented target region, the You Only Look Once (YOLO) algorithm is further utilized to detect and track the OCT probe marker. The acquisition time point of each frame served as the matching parameter to achieve registration between the two modalities. Following registration, an expert compared the identified marker position on each angiography frame with its corresponding actual location to measure the resulting geographical error. The accuracy of cACR was validated using four real clinical cases, with a geographical error of less than 0.35 mm as the evaluation criterion. The segmentation model achieved higher accuracy (Dice coefficient: 0.867 ± 0.166) than baseline U-Net models. OCT-cACR demonstrated an accuracy of 93.33% to 100% in the four test cases, achieving precise alignment of angiography and OCT images. The proposed cACR approach is feasible and accurate and may serve as a promising tool for improving the diagnosis and treatment of carotid artery diseases.
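The acquisition-time matching step can be illustrated with a nearest-timestamp pairing; this is a generic sketch assuming sorted timestamps in seconds, not the paper's implementation:

```python
import bisect

def match_frames(oct_times, angio_times):
    """Pair each OCT frame with the index of the angiography frame
    closest in acquisition time. Both lists hold sorted timestamps in
    seconds; a generic stand-in for time-based co-registration."""
    matched = []
    for t in oct_times:
        i = bisect.bisect_left(angio_times, t)
        # candidates: the frame just before and just after t
        cands = [j for j in (i - 1, i) if 0 <= j < len(angio_times)]
        matched.append(min(cands, key=lambda j: abs(angio_times[j] - t)))
    return matched
```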
- New
- Research Article
- 10.1021/acs.jpclett.5c02681
- Nov 4, 2025
- The journal of physical chemistry letters
- Jiayin Shi + 3 more
High ionic conductivity electrolytes are vital for robust lithium-ion battery performance, especially in low-temperature environments. In this study, we systematically investigated a novel chemical space comprising 2604 electrolyte formulations using high-throughput molecular dynamics (MD) simulations, integrating the OPLS-AA force field with the RESP2 charge model. This methodology accurately replicated Li+ solvation shell structures and identified numerous innovative electrolytes exhibiting room-temperature ionic conductivities above 10 mS/cm, many of which were experimentally validated for the first time. By leveraging MD data sets and machine learning methods, the composition-property relationships governing Li+ solvation shell structure and ion transport in electrolytes were elucidated. Li+ solvation shell structures are primarily influenced by solvent concentration, molecular topology, and surface charge distribution, with higher solvent concentrations enhancing Li+-molecule coordination numbers. The ionic conductivity of an electrolyte is predominantly determined by viscosity: low-viscosity components such as PF6-, DOL, DME, EA, and DMC boost ionic conductivity, while TFSI-, DEC, and EMC tend to reduce it. Additionally, high coordination numbers with weakly coordinating solvents, which lead to larger localized Li+ interactions, further enhance ion transport in the electrolyte. Molecular descriptors, including HallKierAlpha and MaxPartialCharge, exhibit strong correlations with ionic conductivity, serving as effective metrics for large-scale screening tasks. Consequently, the optimal high-conductivity electrolytes should incorporate low-viscosity solvents with high coordination numbers, strong Li+ binding energies, elevated HallKierAlpha values, and reduced MaxPartialCharges.
This synergistic integration of high-throughput simulations and machine learning offers a powerful approach for the discovery of advanced electrolytes.
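The descriptor-correlation screening can be sketched generically: rank descriptors by the absolute Pearson correlation of their values with measured ionic conductivity. Descriptor names follow the abstract; the data in the usage example are synthetic:

```python
import numpy as np

def rank_descriptors(X, names, y):
    """Rank molecular descriptors by |Pearson r| against a target
    property (here, ionic conductivity). X: (samples x descriptors),
    names: descriptor labels, y: property values."""
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(zip(names, r), key=lambda pair: -pair[1])
```

Descriptors at the top of the returned list are the strongest single-variable screening metrics, in the spirit of the HallKierAlpha/MaxPartialCharge finding above.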
- New
- Research Article
- 10.1097/js9.0000000000003727
- Nov 4, 2025
- International journal of surgery (London, England)
- Chunlin Zhao + 13 more
Accurate assessment of surgical incision recovery is crucial for home-based care and rehabilitation of patients. This study aims to develop and evaluate a novel surgical incision recognition method driven by a Vision-Language Foundation Model (VLFM) to improve incision recognition accuracy and optimize home-based management. A total of 1,008 surgical incision images from 865 postoperative patients between May 2022 and August 2023 from Center 1 were retrospectively included as the primary study cohort; 252 surgical incision images from 199 patients between September 2023 and December 2023 from Center 1 were included as the temporal validation cohort; and 183 surgical incision images from 130 patients between October 2023 and December 2023 from Center 2 were included as an independent external validation cohort. Seven categories of surgical incisions were defined and annotated using image processing software by wound care specialists. Our surgical incision recognition system (named DeepIncision) was developed based on the Grounded Language-Image Pre-training VLFM. We compared the performance of DeepIncision with five traditional object detection deep learning methods and with non-medical personnel. Incision recognition performance was evaluated using average precision (AP), average recall (AR), F1-score, and the area under the receiver operating characteristic curve. DeepIncision can efficiently recognize seven categories of surgical incisions: no abnormality, redness, suppuration, scab, tension blisters, ecchymosis around the incision, and dehiscence. The AP for the temporal validation cohort was 68.50%, and that for the external validation cohort was 57.85%, both significantly outperforming other deep learning methods and non-expert manual recognition (P<0.01).
Compared to the average performance of non-medical personnel (AP=3.80%, AR=15.30%) and non-wound-specialist medical staff (AP=41.00%, AR=52.50%), DeepIncision (AP=68.50%, AR=96.91%) achieved absolute AP improvements of 64.7% and 27.5%, and absolute AR improvements of 81.61% and 44.41%, respectively. DeepIncision provides automatic and accurate detection and recognition of surgical incisions, assists patients with home-based incision management, and offers real-time feedback on incisions, enhancing patient self-management and promoting effective home care and rehabilitation of surgical incisions.
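Average precision, the headline metric above, is the area under the precision-recall curve. A minimal single-class sketch follows (detection AP additionally requires IoU-based matching of boxes to ground truth, omitted here):

```python
def average_precision(scores, labels):
    """Area under the precision-recall curve for one class.
    scores: confidence per prediction; labels: 1 if the prediction is a
    true positive, else 0. Assumes at least one positive label."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    npos = sum(labels)
    ap, prev_recall = 0.0, 0.0
    for i in order:                     # sweep threshold from high to low
        if labels[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / npos
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

A perfect ranking (all positives scored above all negatives) yields AP = 1.0.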
- New
- Research Article
- 10.1161/circ.152.suppl_3.4343195
- Nov 4, 2025
- Circulation
- Kamil Faridi + 8 more
Background: Cardiovascular risk models generally use standard statistical methods that limit their performance for common outcomes and do not allow for prediction of rare outcomes. Machine learning methods may address these shortcomings. The objective of this study was to determine how performance would vary across methods for prediction of in-hospital clinical events in the NCDR Left Atrial Appendage Occlusion (LAAO) Registry. Methods: Using data from the LAAO Registry, logistic regression (LR), LASSO, and XGBoost were used to predict the combined outcome of all major in-hospital adverse events (MAE), as well as 11 individual events, for patients undergoing transcatheter LAAO. Randomly selected 70% development and 30% validation cohorts were used for model creation and to assess performance. Models were assessed using the 16 original variables from the previously developed logistic regression risk model and with an expanded list of 51 total variables. Results: The development cohort included data from 57,192 transcatheter LAAO procedures performed with the Watchman FLX device, and the validation cohort included 24,511 procedures. The overall incidence rate for the composite of all MAE was 1.39%, with rates for individual events ranging from <0.01% up to 1.13%. XGBoost had the best performance for combined MAE using the original model variables in the validation cohort (AUC 0.648 [95% CI 0.626-0.670] vs. 0.630 [95% CI 0.608-0.642] for LR and 0.638 [95% CI 0.626-0.670] for LASSO). When the expanded list of candidate variables was included, XGBoost (AUC 0.653 [95% CI 0.635-0.671]) performed marginally better than LASSO (AUC 0.644 [95% CI 0.628-0.660]) for MAE in the validation cohort, whereas LR performed poorly (AUC 0.515 [95% CI 0.501-0.529]). Performance across all methods declined and was generally worse for the most infrequent events (Figure).
XGBoost generally outperformed the other model types and performed better with an expanded list of variables, but this was not consistently the case, particularly for very rare events. However, prediction of mortality using XGBoost was incrementally better. Conclusions: In a nationwide registry cohort, the XGBoost machine learning method improved prediction of a composite of all MAE and several individual events over standard methods, particularly when using an expanded list of variables. Prediction of rare mortality events was also improved, although this was not consistently the case for other rare outcomes.
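The AUCs compared above can be estimated directly from model scores via the rank-sum (Mann-Whitney) formulation; a dependency-free sketch:

```python
def auc(scores, labels):
    """AUROC as the fraction of positive/negative pairs the model ranks
    correctly, counting ties as half. labels: 1 = event, 0 = no event."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

With a 1.39% event rate, as in the composite MAE above, the negative set dwarfs the positive set, which is why rare outcomes give unstable AUC estimates.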
- New
- Research Article
- 10.1515/jag-2025-0076
- Nov 4, 2025
- Journal of Applied Geodesy
- Jyothi Ravi Kiran Kumar Dabbakuti + 4 more
Abstract Total electron content (TEC) is an important parameter in space weather studies and in Global Navigation Satellite System (GNSS)-based navigation and communication applications. Conventional linear forecasting models struggle to represent the complex nonlinear behavior of ionospheric dynamics. Nonlinear approaches based on advanced learning methods offer higher accuracy but demand substantial computational resources, making them impractical for real-time use in resource-constrained Internet of Things (IoT) environments. The emergence of IoT technology has made affordable GNSS data accessible through cloud platforms, enabling continuous, real-time collection of TEC data. In this paper, an efficient Successive Variational Mode Decomposition (SVMD) and Random Vector Functional Link (RVFL) framework is implemented to predict TEC via cloud platforms through ThingSpeak channels. TEC observations from 2018 at Bengaluru (Geographic: 13.02° N, 77.57° E) are considered for analysis. SVMD adaptively decomposes the TEC signal without requiring predefined mode selection, while RVFL enables fast training through random weights, direct connections, and universal approximation capabilities. The results demonstrate that SVMD–RVFL achieves a Root Mean Square Error (RMSE) of 0.55 TECU, a Mean Absolute Error (MAE) of 0.61 TECU, a Mean Absolute Percentage Error (MAPE) of 7.64%, a correlation coefficient of 99.32%, and a training time of 3.82 s. The proposed approach combines high precision with a low computational load, making it suitable for real-time ionospheric monitoring systems and IoT technologies.
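The RVFL predictor described above (random hidden weights, direct input-output links, closed-form output solve) can be sketched with NumPy; the hyperparameters below are illustrative, not the paper's:

```python
import numpy as np

def rvfl_fit_predict(X_tr, y_tr, X_te, n_hidden=64, reg=1e-3, seed=0):
    """Random Vector Functional Link sketch: a fixed random nonlinear
    expansion plus direct input links, with output weights obtained in
    closed form via ridge regression (no iterative training)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X_tr.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)

    def design(X):
        H = np.tanh(X @ W + b)          # random hidden features
        return np.hstack([X, H])        # direct links + hidden features

    D = design(X_tr)
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y_tr)
    return design(X_te) @ beta
```

Because only `beta` is solved for, training cost is one linear solve, which is the source of the sub-second training times the abstract emphasizes.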
- New
- Research Article
- 10.1007/s10916-025-02288-8
- Nov 4, 2025
- Journal of medical systems
- Resul Adanur + 3 more
Electrocorticography (ECoG) signals provide a valuable window into neural activity, yet their complex structure makes reliable classification challenging. This study addresses the problem by proposing a feature-selective framework that integrates multiple feature extraction techniques with statistical feature selection to improve classification performance. Power spectral density, wavelet-based features, Shannon entropy, and Hjorth parameters were extracted from ECoG signals obtained during a visual task. The most informative features were then selected using analysis of variance (ANOVA), and classification was performed with several machine learning methods, including decision trees, support vector machines, neural networks, and long short-term memory (LSTM) networks. Experimental results show that the proposed framework achieves high accuracy across individual patients as well as the combined dataset, with clear separability between classes confirmed through t-SNE visualization. In addition, analysis of selected features highlights the prominent role of electrodes located near the visual cortex, providing insights into the spatial distribution of neural activity.
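The ANOVA-based feature selection step can be sketched as a one-way F statistic computed per feature, keeping the top-scoring features; a minimal NumPy version:

```python
import numpy as np

def anova_f(x, y):
    """One-way ANOVA F statistic for one feature x against class labels y:
    between-group variance over within-group variance."""
    groups = [x[y == g] for g in np.unique(y)]
    k, n = len(groups), len(x)
    grand = x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_top_k(X, y, k=10):
    """Indices of the k features with the largest F statistics."""
    f = np.array([anova_f(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(f)[::-1][:k]
```

Features whose class means differ strongly relative to within-class spread, such as channels near the visual cortex in the study above, receive large F values and survive selection.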
- New
- Research Article
- 10.1161/circ.152.suppl_3.4362152
- Nov 4, 2025
- Circulation
- Monique Gardner + 9 more
Introduction: Neonates undergoing surgery for congenital heart disease (CHD) are at risk of death, longer intensive care unit (ICU) stays, and readmissions. Current prognostic models rely predominantly on unmodifiable clinical factors. Aims: We aimed to compare machine learning methods for predicting ICU-30, a validated composite outcome after neonatal surgery with cardiopulmonary bypass (CPB), using pre-operative blood biomarkers versus clinical features. Hypothesis: Inflammatory and organ injury biomarkers will predict ICU-30 better than clinical factors alone. Methods: Plasma and clinical data were collected from consecutively enrolled neonates (<30 days of age) immediately before CPB. Twenty-eight biomarkers were measured via individual or multiplexed ELISA and tested for association with ICU-30, defined as (1) mortality within 30 days of CPB, (2) ICU stay >30 days after CPB, or (3) ICU readmission within 30 days after CPB. Predictive performance of the machine learning techniques XGBoost, LASSO, and random forest (RF) was compared based on area under the receiver operating characteristic (AUROC) curve. For XGBoost, we identified the top 20 features based on importance (gain) to the model. Results: Biomarkers were available for 144 patients. Most were male (51%), White (64%), and non-Hispanic (75%). Median age at surgery was 3.6 days (IQR 2.6-5.2). Most operations were STAT 3 (32%) or STAT 5 (28%). ICU-30 occurred in 29 (20%) subjects. Using clinical and biomarker data, XGBoost performed moderately well (AUROC 0.75) to predict ICU-30 and was superior to RF (AUROC 0.73) and LASSO (AUROC 0.71) (Figure 1). Biomarkers alone performed better (AUROC 0.72) than clinical data alone (AUROC 0.61) (Figure 2). For XGBoost, 19 of the top 20 features were biomarkers, with the top 3 being neutrophil gelatinase-associated lipocalin (NGAL), trefoil factor 3 (TFF3), and growth differentiation factor-15 (GDF-15) (Figure 3).
Conclusions: These preliminary data demonstrate moderate performance of a multi-biomarker model for predicting poor outcome post-CPB for neonatal CHD. A biomarker-only model outperformed a clinical-only model. Two of the top three most important features were biomarkers of renal and gut inflammation, suggesting preoperative splanchnic inflammation may be relevant to postoperative outcomes. Further investigation may identify mechanistic pathways for improved prognostication and could offer insights for targeted therapeutic interventions.
- New
- Research Article
- 10.1088/1361-6501/ae0e8a
- Nov 4, 2025
- Measurement Science and Technology
- Yadong Jiang + 5 more
Abstract In the field of fault diagnosis, transfer learning methods have achieved remarkable progress in rolling bearing fault diagnosis. However, existing approaches still face challenges in feature extraction and in aligning features between the source and target domains: feature networks struggle to capture both local and global features effectively, and distribution discrepancies between the two domains further degrade the performance of transfer models. To address these issues, this paper proposes a multilayer domain-adaptive fault diagnosis method that integrates global and local feature representations (MLDA-GLFD). Specifically, an enhanced local feature extraction module (SLFE) is designed by combining depthwise separable convolution and partial convolution to capture local feature information more precisely. In addition, a global feature extraction module (GFAB) is constructed, which incorporates a multi-head self-attention mechanism (MHSA), a global context block (GCblock), and a pyramid pooling module (PPM) to jointly strengthen global feature extraction. To further achieve feature distribution alignment between the source and target domains, a dynamic convolution (DConv) module with a hierarchical domain alignment mechanism is designed to adaptively adjust the receptive field of convolutional kernels. Moreover, a combination of Maximum Mean Discrepancy (MMD) and Multiple Kernel MMD (MKMMD) is employed to accurately align inter-domain features, thereby enhancing the model’s transferability to the target domain. Experimental results on two rolling bearing datasets, CWRU and SDUST, demonstrate that the proposed MLDA-GLFD method achieves average accuracies of 96% and 94%, respectively, significantly outperforming the other comparison methods.
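The MMD alignment criterion mentioned above has a compact estimator; below is a sketch of the (biased) squared-MMD estimate with a single Gaussian kernel (the paper combines MMD with multi-kernel MMD, not shown):

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    samples X and Y (rows = samples) under a Gaussian RBF kernel.
    Near zero when the two domains share a distribution."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Minimizing this quantity between source- and target-domain feature batches is what drives the domain alignment described above.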
- New
- Research Article
- 10.47772/ijriss.2025.910000036
- Nov 3, 2025
- International Journal of Research and Innovation in Social Science
- Dr Nor Hafizi Bin Yusof + 2 more
Background: Talaqqi and musyafahah are face-to-face learning methods that involve direct transmission of knowledge from teacher to student. In the context of reciting the al-Quran, this practice was inherited from the Archangel Gabriel (PBUH), implemented by Prophet Muhammad (PBUH) with the following generations, and has continued until today. This method should be applied during teaching and learning sessions in al-Quran recitation classes. However, the current school education system is burdened with a congested academic curriculum and various non-academic programs that have posed challenges to implementing talaqqi and musyafahah when teaching the al-Quran. This issue also affects students enrolled in al-Quran memorisation (tahfiz) programs. Objective: This study aimed to examine teachers’ perceptions of the implementation of talaqqi and musyafahah in the Tamayyuz program and of al-Quran learning by Tamayyuz students, and to analyse teachers’ views and suggestions for improving the program. Methods: This study combined qualitative and quantitative approaches. Qualitative data were collected through document analysis and interviews, while quantitative data were collected using questionnaire and performance test instruments. The compiled data were analysed using the Statistical Package for the Social Sciences (SPSS 23.0) via crosstab analysis. Results: Findings indicate that most teachers have a positive perception of the talaqqi and musyafahah methods used in teaching and learning the al-Quran in schools, and the majority of Tamayyuz teachers also practised these methods in their al-Quran teaching sessions. However, several implementation issues were identified, such as a lack of teaching skills for demonstrating proper recitation and limited exposure to effective talaqqi and musyafahah techniques.
Furthermore, students were burdened with heavy academic loads and tight schedules that made it difficult to engage fully with these methods. Conclusion: The findings of this study can help relevant parties improve the implementation of the talaqqi and musyafahah methods in the Tamayyuz program, ultimately contributing to students’ excellence.
- New
- Research Article
- 10.1093/bioinformatics/btaf601
- Nov 3, 2025
- Bioinformatics (Oxford, England)
- Jan Pielesiak + 3 more
Most widely used methods for evaluating RNA 3D structure models require experimental reference structures, which restricts their use for novel RNAs. They also often overlook recurrent structural features shared across multiple predictions of the same sequence. Although consensus approaches have proven effective in RNA sequence analysis and evolutionary studies, no existing tool applies these principles to evaluate ensembles of 3D models. This gap hampers the identification of native-like folds in computational predictions, particularly as AI-driven methods become increasingly prevalent. This paper presents RNAtive, the first computational tool to apply consensus-derived secondary structures for reference-free evaluation of RNA 3D models. RNAtive aggregates recurrent base-pairing and stacking interactions across ensembles of predicted 3D structures to construct a consensus secondary structure. It introduces a novel conditionally weighted consensus mode that treats interaction networks as fuzzy sets and uniquely allows integration of user-defined 2D structural constraints, enabling evaluation guided by experimental data. Input RNA models are ranked using two adapted binary-classification-based scores. Benchmarking against CASP15 competition data shows that models consistent with the consensus exhibit native-like structural features. The RNAtive web server offers an intuitive platform for comparing and prioritizing RNA 3D predictions, providing a scalable solution to address the variability inherent in deep learning and fragment-assembly methods. By bridging consensus principles with 3D structural analysis, RNAtive advances the exploration of RNA conformational landscapes and has potential applications in fields like therapeutic RNA design. RNAtive is a freely accessible web server with a modern, user-friendly interface, available for scientific, educational, and commercial use at https://rnative.cs.put.poznan.pl/. 
Supplementary data are available at Bioinformatics online.
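The consensus-counting idea can be illustrated on base-pair sets; the following is a simplified stand-in for RNAtive's weighted consensus, with our own function name and a plain support-fraction threshold in place of the fuzzy-set weighting:

```python
from collections import Counter

def consensus_pairs(models, threshold=0.5):
    """Build a consensus secondary structure from an ensemble of 3D-model
    annotations. Each model is a set of (i, j) base-pair tuples; a pair
    enters the consensus when the fraction of models supporting it
    reaches `threshold`. Returns {pair: support fraction}."""
    support = Counter(p for m in models for p in m)
    n = len(models)
    return {p: c / n for p, c in support.items() if c / n >= threshold}
```

Models can then be ranked by how well their own pair sets agree with this consensus, which is the reference-free scoring principle described above.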
- New
- Research Article
- 10.61132/damai.v2i4.1331
- Nov 3, 2025
- Damai : Jurnal Pendidikan Agama Kristen dan Filsafat
- Lisdayanti Tinambunan + 5 more
Learning strategies are both a systematic approach and an art for managing the learning process so that learning objectives can be achieved effectively and efficiently. As a method, learning strategies are designed based on specific principles and rules derived from learning theory and educational research. This makes learning strategies a distinct field of knowledge that can be studied, developed, and applied scientifically. As an art, learning strategies demonstrate an educator's ability to creatively and flexibly utilize various learning resources, methods, and media according to student characteristics and the learning environment. A teacher who possesses sensitivity and intuition in managing learning can create a pleasant atmosphere and motivate students to be active and independent in their learning. Without a clear and directed strategy, the learning process tends to be haphazard and unfocused, and its objectives are difficult to achieve. Therefore, planning and implementing appropriate learning strategies are crucial factors in the success of the educational process, as they determine the extent to which teaching and learning activities can proceed optimally and produce meaningful learning outcomes for students.
- New
- Research Article
- 10.3389/fcomp.2025.1676362
- Nov 3, 2025
- Frontiers in Computer Science
- Evita Roponena + 2 more
Information and communication technology (ICT) is crucial for maintaining efficient communications, enhancing processes, and enabling digital transformation. As ICT becomes increasingly significant in our everyday lives, ensuring its security is essential for maintaining digital trust and resilience against evolving cyber threats. These technologies generate large amounts of data that must be analyzed continuously to detect threats to an ICT system and protect the sensitive information it may contain. NetFlow is a network protocol that can be used to monitor network traffic, collect Internet Protocol (IP) addresses, and detect anomalies in traffic records. The article follows the design science research (DSR) methodology with the objective of providing a method for developing a set of features for NetFlow analysis with machine learning. The sets of features were analyzed and validated by implementing anomaly detection with the K-means clustering algorithm and time-series forecasting using the long short-term memory (LSTM) method. The study provides two separate sets of features, one for each machine learning method (24 features for clustering and 14 for LSTM), an overview of the anomaly detection methods used in this research, and a method to combine both machine learning approaches. Furthermore, this study introduces a method that integrates the outputs of both models and evaluates the reliability of the final decision based on Bayes' theorem and the previous performance of the models.
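The Bayes'-theorem fusion of the two detectors can be sketched as sequential posterior updates, treating the detectors as conditionally independent and using each model's historical true- and false-positive rates; the names and rates below are illustrative:

```python
def bayes_combine(prior, flags, tpr, fpr):
    """Update P(anomaly) given independent detector verdicts via Bayes'
    theorem. flags[i] is detector i's binary verdict (1 = anomaly);
    tpr[i]/fpr[i] are its historical true/false positive rates, i.e.
    the 'previous performance' used to weight its vote."""
    p = prior
    for f, t, fp in zip(flags, tpr, fpr):
        like_a = t if f else 1 - t      # P(verdict | anomaly)
        like_n = fp if f else 1 - fp    # P(verdict | normal)
        num = like_a * p
        p = num / (num + like_n * (1 - p))
    return p
```

Two concurring flags from reliable detectors (say, K-means and the LSTM residual check) push a 10% prior above 90%, while two clear verdicts push it well below the prior.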
- New
- Research Article
- 10.1111/nyas.70139
- Nov 3, 2025
- Annals of the New York Academy of Sciences
- Hasan Zan
The automatic classification of dermoscopic images is essential for the early diagnosis and treatment of skin cancer. However, this task remains challenging due to high visual similarity among lesion types, variations in lesion appearance across progression stages, and the presence of artifacts in the images. While deep learning-based approaches have outperformed traditional machine learning methods, many existing models are computationally intensive and offer limited interpretability. These limitations hinder their integration into clinical workflows where efficiency and transparency are critical. In this study, I propose a framework based on focal modulation networks (FMNs) for skin lesion classification. FMNs are designed to efficiently capture both local and global features, addressing the limitations of transformer-based models in processing high-resolution medical images. I evaluate four FMN variants, namely, Tiny, Small, Base, and Large, on three public datasets: ISIC 2017, ISIC 2018, and ISIC 2019. The highest classification accuracy was obtained on ISIC 2019 with 97.8%, followed by 96.4% on ISIC 2018, and 88.1% on ISIC 2017. These results match or exceed those reported in several previous studies. Additionally, FMNs offer model interpretability through modulator visualization. Overall, the proposed method provides an accurate, efficient, and transparent solution for automated skin lesion classification.
- New
- Research Article
- 10.1021/acs.jafc.5c09417
- Nov 3, 2025
- Journal of agricultural and food chemistry
- Bohan Xu + 7 more
Traditional agricultural urease inhibitors suffer from low inhibition efficiency and a short duration of action. Therefore, in this study the typical traditional urease inhibitor n-butylthiophosphoric acid triamide (NBPT) was modified using molecular docking and molecular dynamics combined with machine learning methods. Seven of 79 designed urease inhibitor substitutes were screened out, with urease inhibition potential improved by 21.30-48.91% compared with NBPT. After applying the improved G1 evaluation method, four were selected as alternative urease inhibitor substitutes, with soil toxicity reduced by 22.30-47.40% and urease inhibition potential increased by 25.30-47.71%. In addition, simulation of functional properties indicated that the alternatives are expected to reduce the ammonia volatilization rate by 65.16-76.81% and N2O emissions by 30.53-57.60% after application, with a soil half-life 2.5 times that of NBPT. Mechanistic analysis of protein-ligand interactions revealed that the number of hydrogen bonds and π-π stacking interactions are the primary intrinsic factors driving the improved urease inhibition of the substitute molecules. This study provides a new solution for reducing nitrogen loss, lowering greenhouse gas emissions, alleviating agricultural nonpoint source pollution, and promoting the research and development of green fertilizers.
- New
- Research Article
- 10.1007/s11356-025-37040-9
- Nov 3, 2025
- Environmental science and pollution research international
- Valentine Conny Putri Perdana + 4 more
Water quality monitoring plays a critical role in environmental protection and public health, particularly amid growing ecological challenges and the need for sustainable resource management. This study proposes and evaluates a predictive classification framework for assessing water pollution levels using two machine learning techniques, support vector machine (SVM) and extreme gradient boosting (XGBoost), within the Pollution Mitigation Classification (PMC) scheme. The models were trained on data balanced with SMOTE-Tomek resampling (Synthetic Minority Over-sampling Technique combined with Tomek links) to address class imbalance. XGBoost demonstrated superior performance with an accuracy of 98.76% and an F1-Macro score of 97.62%, while SVM achieved an accuracy of 90.25% and an F1-Macro score of 83.57%. Interpretability analyses via SHAP and LIME revealed that biological and chemical indicators such as fecal coliform, BOD, and COD had the highest feature importance. Validation using dummy features confirmed that both models learned meaningful patterns rather than fitting noise or spurious correlations. Beyond statistical accuracy, this research integrates regulatory compliance validation against Indonesia's Government Regulation No. 22/2021 (Class II water quality standards). Findings indicate that several predictions labeled "Safe" by the models violated one or more legal thresholds, raising concerns over potential false-safe classifications. To mitigate this risk, the study proposes a regulatory-aware layer comprising rule-based validation modules, probabilistic calibration methods (e.g., Platt scaling), and early warning systems to enhance real-world applicability. The proposed framework underscores the importance of harmonizing predictive performance with legal compliance, offering a scalable, interpretable, and policy-aligned solution for AI-driven environmental monitoring systems.
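The F1-Macro score reported above is the unweighted mean of per-class F1 scores, which is why it is a stricter summary than accuracy on imbalanced data. A minimal pure-Python sketch of the metric (matching the usual definition, with the 0/0 case treated as 0):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (F1-Macro)."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

Because every class contributes equally regardless of its frequency, a model that ignores a rare "polluted" class is penalized here even if its overall accuracy stays high.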
- New
- Research Article
- 10.1002/prot.70076
- Nov 3, 2025
- Proteins
- Andriy Kryshtafovych + 4 more
CASP16 is the most recent in a series of community experiments to rigorously assess the state of the art in areas of computational structural biology. The field has advanced enormously in recent years: in early CASPs, the assessments centered on whether the methods were at all useful; now they mostly focus on how near we are to not needing experiments. In most areas, deep learning methods dominate, particularly AlphaFold variants and associated technology. In this round, there is no significant change in overall agreement between calculated monomer protein structures and their experimental counterparts, not because of method deficiencies but because, for most proteins, agreement is likely as high as can be obtained given experimental uncertainty. For protein complexes, huge gains in accuracy were made in the previous CASP, but there still appears to be room for further improvement. In contrast to these encouraging results, for RNA structures, deep learning methods are notably unsuccessful at present and are not superior to traditional approaches; both still produce very poor results in the absence of structural homology. For macromolecular ensembles, the small CASP target set limits conclusions, but generally, in the absence of structural templates, results tend to be poor and detailed structures of alternative conformations are usually of relatively low accuracy. For organic ligand-protein structures and affinities (important for aspects of drug design), deep learning methods are substantially more successful than traditional ones on the relatively easy CASP target set, though the results often fall short of experimental accuracy. In the less glamorous but essential area of model accuracy estimation, previous CASPs established methods that give reliable accuracy estimates at the amino-acid level. The present CASP results show that the best methods are also largely effective in selecting models of protein complexes with high interface accuracy.
Will upcoming method improvements overcome the remaining barriers to reaching experimental accuracy in all categories? We will have to wait until the next CASP to find out, but there are two promising trends. One is the combination of traditional physics-inspired methods and deep learning, and the other is the expected increase in training data, especially for ligand-protein complexes.
- New
- Research Article
- 10.1093/jamiaopen/ooaf138
- Nov 3, 2025
- JAMIA Open
- Bokai Zhao + 7 more
Background: Critically ill patients are managed with complex medication regimens that require medication management to optimize safety and efficacy. When performed by a critical care pharmacist (CCP), discrete medication management activities are termed medication interventions. The ability to define CCP workflow and intervention timeliness depends on the ability to predict the medication management needs of individual intensive care unit (ICU) patients. The purpose of this study was to develop prediction models for the number and intensity of medication interventions in critically ill patients. Methods: This was a retrospective, observational cohort study of adult patients admitted to an ICU between June 1, 2020 and June 7, 2023. Models were created to predict the number of pharmacist interventions using patient- and medication-related predictor variables collected at baseline or during the first 24 hours of ICU stay. Both regression and supervised machine learning models (Random Forest, Support Vector Machine, and XGBoost) were developed. Root mean square error (RMSE), mean absolute error (MAE), and symmetric mean absolute percentage error (sMAPE) were calculated. Results: In a cohort of 13 373 patients, the average number of interventions was 4.7 (standard deviation [SD] 7.1) and the average intervention intensity was 24.0 (SD 40.3). Among the ML models, the Random Forest model had the lowest RMSE (9.26) while Support Vector Machine had the lowest MAE (4.71). All machine learning models performed similarly to the stepwise logistic regression model, and these performed better than a base model combining severity of illness with medication regimen complexity scores. Conclusions: Intervention quantity can be predicted using models that incorporate patient-specific factors from the first 24 hours of admission.
In this study, machine learning methods did not provide a substantial performance advantage; however, because inter-institutional variation in intervention documentation precludes external validation, our results provide a framework for workload modeling at any institution where the models proposed here could be evaluated.
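The three error metrics the study reports can be computed directly from paired observed and predicted intervention counts. A small self-contained sketch follows; note that sMAPE conventions vary in the literature (some scale to 0-200%), so the common 2|y - ŷ| / (|y| + |ŷ|) percentage form used below is an assumption about, not a quote of, the study's exact formula.

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and symmetric MAPE (in percent) for paired observations."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    # sMAPE: (100/n) * sum of 2|y - yhat| / (|y| + |yhat|); 0/0 treated as 0
    smape = 100.0 / n * sum(
        0.0 if t == p == 0 else 2 * abs(t - p) / (abs(t) + abs(p))
        for t, p in zip(y_true, y_pred))
    return rmse, mae, smape
```

Because RMSE squares the errors, it penalizes the occasional patient with many interventions more heavily than MAE does, which is consistent with different models "winning" on different metrics as reported above.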
- New
- Research Article
- 10.1186/s12915-025-02432-3
- Nov 3, 2025
- BMC Biology
- Li Zeng + 4 more
Background: As the core functional carriers of life activities, proteins require high-quality representations: the quality of a protein representation directly affects the accuracy of downstream functional prediction. In recent years, multimodal deep learning methods have significantly improved protein representation learning by fusing sequence, structure, and chemical characteristics. However, current research still faces two core challenges: first, the guiding role of structural information during multimodal feature interaction has not been fully explored; second, existing fusion strategies mostly use static weight allocation, which cannot adapt to the dynamic correlation between sequence and structural features, limiting accuracy in identifying key functional residues. Results: We propose ProGraphTrans, a multimodal dynamic collaborative framework for protein representation learning. ProGraphTrans builds a dynamic-attention multimodal fusion mechanism and captures local sequential patterns through a multi-scale convolutional neural network. Conclusions: Experimental results on four protein downstream tasks show that ProGraphTrans not only outperforms other methods across various metrics but also offers strong interpretability, demonstrating its advantages and effectiveness as a protein representation method.
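The contrast the abstract draws between static and dynamic fusion can be sketched in a few lines: instead of mixing sequence and structure embeddings with fixed scalar weights, a small gating head computes per-residue weights from the inputs themselves. This is an illustrative toy, not ProGraphTrans; the function name, the gating projection `Wg`, and the random inputs are all assumptions (in a real model `Wg` would be learned).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dynamic_fusion(seq_emb, struct_emb, Wg):
    """Fuse per-residue sequence and structure embeddings with
    input-conditioned (dynamic) weights instead of fixed ones.

    seq_emb, struct_emb: (n, d) per-residue features.
    Wg: (2*d, 2) gating projection (illustrative; would be learned).
    """
    both = np.concatenate([seq_emb, struct_emb], axis=-1)   # (n, 2d)
    gates = softmax(both @ Wg)                              # (n, 2), rows sum to 1
    return gates[:, :1] * seq_emb + gates[:, 1:] * struct_emb

rng = np.random.default_rng(1)
n, d = 10, 6
fused = dynamic_fusion(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                       rng.normal(size=(2 * d, 2)))
```

A static strategy would amount to fixing `gates` to a constant such as (0.5, 0.5) for every residue; the gating head is what lets structure dominate at some residues and sequence at others.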