Enhancing explainable AI with graph signal processing: Applications in water distribution systems.
20
- 10.1016/j.watres.2023.120264
- Jun 24, 2023
- Water Research
18
- 10.1061/jwrmd5.wreng-5870
- Jul 1, 2023
- Journal of Water Resources Planning and Management
171
- 10.1061/(asce)wr.1943-5452.0000339
- Dec 6, 2012
- Journal of Water Resources Planning and Management
3
- 10.1016/j.jwpe.2024.105472
- May 17, 2024
- Journal of Water Process Engineering
697
- 10.1016/j.inffus.2023.101805
- Apr 18, 2023
- Information Fusion
200
- 10.1016/j.physrep.2023.03.005
- Apr 4, 2023
- Physics Reports
28
- 10.2166/hydro.2017.036
- Dec 8, 2017
- Journal of Hydroinformatics
146
- 10.1061/(asce)0733-9496(2005)131:3(172)
- May 1, 2005
- Journal of Water Resources Planning and Management
10
- 10.1109/tai.2023.3279808
- Feb 1, 2024
- IEEE Transactions on Artificial Intelligence
7
- 10.1016/j.watres.2024.122779
- Nov 9, 2024
- Water Research
- Research Article
- 10.30574/wjaets.2025.15.2.0635
- May 30, 2025
- World Journal of Advanced Engineering Technology and Sciences
The rapid advancements in artificial intelligence and machine learning have led to the development of highly sophisticated models capable of superhuman performance in a variety of tasks. However, the increasing complexity of these models has also resulted in them becoming "black boxes", where the internal decision-making process is opaque and difficult to interpret. This lack of transparency and explainability has become a significant barrier to the widespread adoption of these models, particularly in sensitive domains such as healthcare and finance. To address this challenge, the field of Explainable AI has emerged, focusing on developing new methods and techniques to improve the interpretability and explainability of machine learning models. This review paper aims to provide a comprehensive overview of the research exploring the combination of Explainable AI and traditional machine learning approaches, known as "hybrid models". This paper discusses the importance of explainability in AI, and the necessity of combining interpretable machine learning models with black-box models to achieve the desired trade-off between accuracy and interpretability. It provides an overview of key methods and applications, integration techniques, implementation frameworks, evaluation metrics, and recent developments in the field of hybrid AI models. The paper also delves into the challenges and limitations in implementing hybrid explainable AI systems, as well as the future trends in the integration of explainable AI and traditional machine learning. Altogether, this paper will serve as a valuable reference for researchers and practitioners working on developing explainable and interpretable AI systems.
Keywords: Explainable AI (XAI), Traditional Machine Learning (ML), Hybrid Models, Interpretability, Transparency, Predictive Accuracy, Neural Networks, Ensemble Methods, Decision Trees, Linear Regression, SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), Healthcare Analytics, Financial Risk Management, Autonomous Systems, Predictive Maintenance, Quality Control, Integration Techniques, Evaluation Metrics, Regulatory Compliance, Ethical Considerations, User Trust, Data Quality, Model Complexity, Future Trends, Emerging Technologies, Attention Mechanisms, Transformer Models, Reinforcement Learning, Data Visualization, Interactive Interfaces, Modular Architectures, Ensemble Learning, Post-Hoc Explainability, Intrinsic Explainability, Combined Models
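One of the hybrid strategies surveyed in reviews like this is the global surrogate: an interpretable model distilled from a black-box model's predictions. A minimal sketch of that idea follows; the dataset, model choices, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a hybrid (black box + interpretable surrogate) setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Train a shallow tree to mimic the black box's *predictions* (not the true labels).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_tr, black_box.predict(X_tr))

# Fidelity = how faithfully the interpretable model reproduces the black box.
fidelity = accuracy_score(black_box.predict(X_te), surrogate.predict(X_te))
print(f"surrogate fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable rules approximating the ensemble
```

The fidelity score is one of the evaluation metrics such reviews discuss for judging how well the interpretable half of a hybrid system tracks the accurate half.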
- Research Article
- 10.69996/ijari.2025015
- Sep 30, 2025
- International Journal of Advance Research and Innovation
The increasing integration of Artificial Intelligence (AI) in financial services has significantly optimized the loan approval process. However, the lack of transparency and interpretability in AI decisions has raised concerns regarding trust, fairness, and accountability. This paper proposes a novel Explainable Artificial Intelligence (XAI) framework for predicting loan approval status and rejection reasons, thereby enhancing stakeholder trust in AI-driven financial decisions. The model utilizes SHAP (SHapley Additive exPlanations) values to interpret the contribution of each feature in classification tasks, offering granular insights into why a particular loan application is approved or denied. An experimental analysis of a real-world loan application dataset reveals that the proposed model achieves a high prediction accuracy of 90% in identifying rejection reasons, while maintaining explainability through both visual and numerical interpretations. The results demonstrate the effectiveness of XAI in making complex AI models interpretable and regulatory-compliant. This work contributes to building transparent, ethical, and reliable financial systems by integrating AI with human-understandable justifications for decisions.
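A minimal sketch of the general pattern described above (not the paper's pipeline): a classifier predicts loan approval, and KernelSHAP attributes one applicant's predicted decision to individual features. The synthetic data, feature names, and model choice are assumptions made to keep the example self-contained.

```python
# Hedged sketch: SHAP feature attributions for a single loan decision.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "income": rng.normal(60, 20, n),
    "credit_score": rng.normal(650, 80, n),
    "debt_to_income": rng.uniform(0.0, 0.8, n),
    "loan_amount": rng.normal(25, 10, n),
})
y = ((X["credit_score"] > 620) & (X["debt_to_income"] < 0.45)).astype(int)  # toy rule

model = GradientBoostingClassifier(random_state=0).fit(X.values, y)

# Explain P(approved) with KernelSHAP; a small background set keeps it fast.
background = shap.sample(X.values, 100)
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
applicant = X.values[:1]
phi = explainer.shap_values(applicant, nsamples=200)[0]   # one attribution per feature

for name, value in sorted(zip(X.columns, phi), key=lambda t: abs(t[1]), reverse=True):
    print(f"{name:>15}: {value:+.3f}")   # negative values push toward rejection
```

Ranked, signed attributions of this kind are the "granular insights" the abstract refers to: the most negative contributors can be read as candidate rejection reasons.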
- Research Article
61
- 10.1167/tvst.9.2.8
- Feb 12, 2020
- Translational Vision Science & Technology
Purpose: Recently, laser refractive surgery options, including laser epithelial keratomileusis, laser in situ keratomileusis, and small incision lenticule extraction, have successfully improved patients' quality of life. Evidence-based recommendation of an optimal surgical technique is valuable in increasing patient satisfaction. We developed an interpretable multiclass machine learning model that selects the laser surgery option at the expert level. Methods: A multiclass XGBoost model was constructed to classify patients into four categories: laser epithelial keratomileusis, laser in situ keratomileusis, small incision lenticule extraction, and contraindication groups. The analysis included 18,480 subjects who intended to undergo refractive surgery at the B&VIIT Eye Center. Training (n = 10,561) and internal validation (n = 2640) were performed using subjects who visited between 2016 and 2017. The model was trained based on clinical decisions of highly experienced experts and ophthalmic measurements. External validation (n = 5279) was conducted using subjects who visited in 2018. The SHapley Additive exPlanations (SHAP) technique was adopted to explain the output of the XGBoost model. Results: The multiclass XGBoost model exhibited an accuracy of 81.0% and 78.9% when tested on the internal and external validation datasets, respectively. The SHAP explanations for the results were consistent with prior knowledge from ophthalmologists. The explanations from the one-versus-one and one-versus-rest XGBoost classifiers helped users easily understand the multicategory classification problem. Conclusions: This study suggests an expert-level multiclass machine learning model for selecting refractive surgery for patients. It also provided a clinical understanding of a multiclass problem based on an explainable artificial intelligence technique. Translational Relevance: Explainable machine learning exhibits a promising future for increasing the practical use of artificial intelligence in ophthalmic clinics.
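A hedged sketch of the modelling pattern in this abstract (multiclass XGBoost explained with SHAP), using synthetic stand-ins for the ophthalmic measurements. The surgery class names echo the abstract; everything else (data, features, hyperparameters) is assumed.

```python
# Sketch: per-class SHAP attributions for a multiclass XGBoost prediction.
import numpy as np
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

classes = ["LASEK", "LASIK", "SMILE", "contraindication"]
X, y = make_classification(n_samples=3000, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, random_state=0)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
sv = np.asarray(explainer.shap_values(X[:1]))   # per-class attributions, one patient
# Older shap returns a list of per-class arrays (4, 1, 12); newer versions return a
# single array shaped (1, 12, 4). Normalise to (n_classes, n_features).
per_class = sv.reshape(4, 12) if sv.shape[0] == 4 else sv[0].T

pred = int(model.predict(X[:1])[0])
top = np.argsort(-np.abs(per_class[pred]))[:3]
print(f"predicted option: {classes[pred]}, most influential feature indices: {top.tolist()}")
```

Reading the attribution row for the predicted class (and comparing it with the other rows) is the one-versus-rest style explanation the abstract describes.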
- Research Article
- 10.2174/0123520965401121250714075403
- Jul 28, 2025
- Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering)
Introduction: In the current educational landscape, universities provide a vast array of diverse and obscure courses. However, the primary challenge students face is decision overload, as the abundance of options can make it difficult to choose courses that align with their interests, career aspirations, and strengths. The development and application of proper course suggestion systems might assist students in overcoming the challenges associated with choosing the right courses. However, these recommendation systems still require improvement to address explainability issues, security issues, and cold start. Our analysis reveals that there is limited research addressing how course recommendation systems can assist 12th-grade students in selecting courses for higher education. Therefore, this study presents a novel personalized recommendation system, namely "Interpretability-Driven Course Recommendation: A Random Forest Approach with Explainable AI", that chooses the top-3 courses for students based on their intermediate class grades/marks. This research provides accurate and interpretable recommendations using the Random Forest algorithm with Explainable AI, which ensures transparency by explaining why a particular course is recommended. Method: The system uses a Random Forest classifier with SHAP (SHapley Additive exPlanations) to forecast the top-3 courses with the greatest expected scores for appropriateness. It uses SHAP values to integrate Explainable AI (XAI) and guarantee openness and trust. The proposed model solves the cold start problem and provides data security. Results: In terms of precision (0.90), recall (0.92), and F1-score (0.92), Random Forest fared better than any other classifier. By combining predictions from several decision trees, its ensemble learning technique increases stability, decreases overfitting, and improves generalization across a broad range of input data patterns. Discussion: A major drawback of conventional black-box machine learning models is their lack of transparency, which is addressed by the incorporation of SHAP into the course recommendation system. In this model, students get suggestions in addition to information on which topic scores had the biggest influence on their choice. Conclusion: This study presents a practical and interpretable course recommendation system designed for 12th-grade students transitioning into higher education. Through the use of Explainable AI-enhanced Random Forest, the model provides precise, transparent, and customized recommendations. It addresses the main drawbacks of conventional systems, such as their lack of explainability and their cold start problems. The model's excellent precision, recall, and F1-score indicate its superior performance, which makes it a useful tool for student guidance and academic advice. For even more thorough recommendations, future research may consider combining long-term academic objectives, aptitude test results, and student interests.
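A sketch of the recommendation step only, under assumed data: a Random Forest ranks courses by predicted probability and SHAP explains why the top course was suggested. The subject names, course labels, and synthetic grades below are illustrative, not the paper's dataset.

```python
# Sketch: top-3 course recommendation plus a SHAP explanation of the top pick.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
subjects = ["math", "physics", "chemistry", "biology", "english", "computer_science"]
courses = ["engineering", "medicine", "commerce", "arts", "computer_applications"]

X = rng.integers(35, 100, size=(2000, len(subjects))).astype(float)   # grade sheets
y = rng.integers(0, len(courses), size=2000)                          # toy labels

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

student = X[:1]
proba = model.predict_proba(student)[0]
top3 = np.argsort(-proba)[:3]
print("recommended courses:", [courses[i] for i in top3])

# Explain the top recommendation with SHAP (handle old list / new array layouts).
sv = np.asarray(shap.TreeExplainer(model).shap_values(student))
contrib = sv[top3[0], 0] if sv.shape[0] == len(courses) else sv[0, :, top3[0]]
order = np.argsort(-np.abs(contrib))
print("most influential subjects:", [subjects[i] for i in order[:3]])
```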
- Research Article
- 10.1002/eej.23510
- May 30, 2025
- Electrical Engineering in Japan
Electric power systems with increasing photovoltaic (PV) penetration face concerns regarding degradation in frequency stability due to heightened output forecast errors. As a countermeasure, given dynamic factors such as demand, PV output, and meteorological elements, calculating the optimal reserve margin (ORM) becomes crucial for economic efficiency and resilience reinforcement. To ensure an efficient ORM, artificial intelligence (AI) is one useful strategy for analyzing the combination of all these elements. However, AI is characterized by the black-box problem, and to achieve transparency, it needs to be transformed into explainable AI. First, this paper analyzed the importance of all features using SHapley Additive exPlanations (SHAP) with a Gaussian process regression model. Then, relevant explanatory variables were selected to improve the prediction accuracy of the ORM. Finally, to verify its effectiveness, this paper planned day-ahead scheduling while securing the ORM determined by the proposed method, and executed detailed demand/supply and system frequency simulations as an operation. The proposed method decreased the risk posed by PV output forecast errors and shortage of reserve margin. Also, the maximum PV capacity increased from 96.2% to 166.2% while maintaining frequency stability.
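A hedged sketch of the workflow outlined above: Gaussian process regression for the reserve-margin target, KernelSHAP for feature importance, then refitting on the highest-ranked explanatory variables. The data, feature names, and the toy target function are stand-ins, not the paper's system model.

```python
# Sketch: SHAP-based feature selection for a Gaussian process regression model.
import numpy as np
import shap
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
features = ["demand", "pv_forecast", "temperature", "irradiance", "wind", "humidity"]
X = rng.normal(size=(400, len(features)))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 0.2, 400)  # toy ORM

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

# Model-agnostic SHAP values over a subsample, then rank by mean |SHAP|.
background = shap.sample(X, 50)
phi = shap.KernelExplainer(gpr.predict, background).shap_values(X[:100], nsamples=100)
importance = np.abs(phi).mean(axis=0)
ranked = np.argsort(-importance)
print("ranked features:", [features[i] for i in ranked])

# Keep only the most relevant explanatory variables before the scheduling step.
top_k = ranked[:3]
gpr_reduced = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr_reduced.fit(X[:, top_k], y)
```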
- Research Article
- 10.71310/pcam.2_64.2025.10
- May 15, 2025
- Проблемы вычислительной и прикладной математики
This article explores the significance of modifying SHAP (SHapley Additive exPlanations) values to enhance model interpretability in machine learning. SHAP values provide a fair attribution of feature contributions, making AI-driven decision-making more transparent and reliable. However, raw SHAP values can sometimes be difficult to interpret due to feature interactions, noise, and inconsistencies in scale. The article discusses key techniques for modifying SHAP values, including feature aggregation, normalization, custom weighting, and noise reduction, to improve clarity and relevance in explanations. It also examines how these modifications align interpretations with real-world needs, ensuring that SHAP-based insights remain practical and actionable. By strategically refining SHAP values, data scientists can derive more meaningful explanations, improving trust in AI models and enhancing decision-making processes. The article provides a structured approach to modifying SHAP values, offering practical applications and benefits across various domains.
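An illustrative numpy-only sketch of two of the modifications discussed above: aggregating SHAP values over feature groups and normalising them to relative shares. The SHAP matrix here is random noise standing in for real explainer output; the feature and group names are assumptions.

```python
# Sketch: group aggregation and normalisation of a SHAP value matrix.
import numpy as np

rng = np.random.default_rng(0)
features = ["age", "bmi", "glucose", "hdl", "ldl", "triglycerides"]
groups = {"demographic": [0], "anthropometric": [1], "lab_panel": [2, 3, 4, 5]}

shap_matrix = rng.normal(size=(500, len(features)))   # (n_samples, n_features)

# 1) Aggregate: additivity lets group contributions be summed feature-wise.
group_shap = np.column_stack(
    [shap_matrix[:, idx].sum(axis=1) for idx in groups.values()]
)

# 2) Normalise: express each group's mean |contribution| as a share of the total,
#    which puts features measured on very different scales on a common footing.
mean_abs = np.abs(group_shap).mean(axis=0)
shares = mean_abs / mean_abs.sum()
for name, share in zip(groups, shares):
    print(f"{name:>15}: {share:.1%} of total attribution")
```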
- Research Article
- 10.22214/ijraset.2025.70080
- Apr 30, 2025
- International Journal for Research in Applied Science and Engineering Technology
Financial forecasting is a cornerstone of investment strategy, economic planning, and risk mitigation. With the advent of Artificial Intelligence (AI), models such as Long Short-Term Memory (LSTM) networks and other deep learning techniques have drastically improved forecasting accuracy. However, the lack of transparency in these models has raised concerns, particularly in regulatory and high-stakes environments. Explainable Artificial Intelligence (XAI) addresses this limitation by offering interpretability into model behavior and predictions. This paper investigates the integration of XAI methods, particularly SHapley Additive exPlanations (SHAP), into time series forecasting models like LSTM and Facebook Prophet. We apply these models to real-world datasets, including stock indices and foreign exchange rates, comparing their predictive performance and interpretability. Results show that XAI-enhanced models maintain high forecasting accuracy while offering actionable insights, making them suitable for both technical analysts and financial regulators. The study highlights the importance of transparency in AI-driven decision systems and proposes a balanced approach between predictive power and explainability.
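A hedged sketch of the general idea only: the paper pairs SHAP with LSTM and Prophet, but to keep the example self-contained, a gradient-boosted regressor over lagged values stands in for the forecaster. The synthetic series and lag choices are assumptions, and the swap of model family is deliberate and stated.

```python
# Sketch: SHAP attributions over lag features of a simple time-series forecaster.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 800)) + 100.0     # synthetic index level
n_lags = 5

# Build a supervised matrix of lagged values -> next value.
X = np.column_stack([prices[i:len(prices) - n_lags + i] for i in range(n_lags)])
y = prices[n_lags:]
feature_names = [f"lag_{n_lags - i}" for i in range(n_lags)]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer gives one attribution per lag for each forecast.
sv = shap.TreeExplainer(model).shap_values(X[-50:])
mean_abs = np.abs(sv).mean(axis=0)
for name, value in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {value:.3f}")
```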
- Research Article
- 10.71097/ijsat.v16.i3.6779
- Jul 26, 2025
- International Journal on Science and Technology
Explainable AI (XAI) has become an essential area within artificial intelligence, focusing on the necessity for clarity and comprehensibility in complex machine learning approaches. As artificial intelligence (AI) platforms continue to expand into major industries such as medicine and banking, comprehending their decision-making procedures is vital for fostering credibility and maintaining ethical utilization. This study focuses on two significant XAI methods, LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), examining their procedures, benefits, and software. It analyses the challenges encountered by such approaches in mobile settings. We recommend methods to balance trade-offs between disclosure and effectiveness, including lightweight approximating methods, model trimming, and selecting on-device versus cloud-based preparation. Case examples in medical diagnostics and fraud identification demonstrate the real-world use of LIME and SHAP, highlighting their effectiveness in delivering interpretable insights. The future of XAI encompasses advancements in both hardware and software, the inclusion of ethical frameworks, and the potential of hybrid models to improve interpretability while handling current limitations. The findings highlight the necessity of choosing suitable XAI techniques tailored to particular contexts to enhance user trust and engagement in AI applications.
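A minimal sketch of a local LIME explanation of the kind surveyed above, on a small public dataset. Everything here (dataset, model, number of features) is an assumption chosen to keep the example runnable, not a reproduction of the study's cases.

```python
# Sketch: explaining one prediction with LIME's local sparse surrogate.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

explainer = LimeTabularExplainer(
    X_tr,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Fit a sparse local surrogate around one test instance and read off the weights.
exp = explainer.explain_instance(X_te[0], model.predict_proba, num_features=5)
for rule, weight in exp.as_list():
    print(f"{rule:>40}: {weight:+.3f}")
```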
- Research Article
- 10.52783/jisem.v10i51s.10442
- May 30, 2025
- Journal of Information Systems Engineering and Management
In order to enhance transparency and interpretability, the main goal of this project is to create a hybrid deep learning model for fake news detection by fusing Explainable AI (XAI) techniques such as SHapley Additive exPlanations (SHAP) with XLNet, FastText, and a CNN algorithm. Introduction: The rapid spread of fake news in the digital age has become a significant issue that influences social stability, public opinion, and political outcomes. False information spreads because of social media platforms' inability to distinguish between authentic and fraudulent content. Despite their effectiveness, traditional fact-checking methods are time-consuming and unable to handle the volume of data generated daily. As a result, automated false news detection systems that utilize advanced artificial intelligence have demonstrated impressive performance in text classification tasks, such as identifying false news. It is challenging to comprehend how these models make decisions, though, because they function as black-box systems. In order to improve interpretability, explainable AI (XAI) techniques have been developed; the SHapley Additive exPlanations (SHAP) method is one that offers details on model predictions. Objectives: The objective of this project is to develop a sophisticated fake news detection system that combines advanced natural language processing and machine learning techniques. By integrating XLNet for superior language understanding, FastText for efficient word representation, and Convolutional Neural Networks (CNNs) for robust feature extraction, the system aims to enhance detection accuracy. Additionally, incorporating Explainable AI techniques, particularly SHAP, will provide clear and interpretable explanations of the model's predictions. This dual focus on performance and transparency seeks to create a reliable tool for identifying misinformation, ultimately fostering greater public trust in digital information sources. Methods: The study's hybrid deep learning methodology combines Convolutional Neural Networks (CNN), XLNet, and FastText with the Explainable AI (XAI) technique SHAP. Group 1: RoBERTa and BERT baselines; although these methods are effective, they are not transparent enough for users to comprehend and have faith in their predictions. Group 2: the hybrid model combining Explainable AI and FastText. Results: The hybrid model's accuracy of 92.3% represents a 5.6% improvement over the baseline accuracy of 87.4%. This shows that the hybrid approach is more effective at correctly distinguishing between real and fake news articles. Additionally, the hybrid model is more effective at reducing false positives, as evidenced by its 90.5% precision, which is 6.2% higher than the baseline model's 85.2%. Similarly, the hybrid model's recall increases by 6.6%, from 86.1% in the baseline model to 91.8%, indicating that it is better at spotting fake news. Finally, the F1-score, which strikes a balance between recall and precision, increased from 85.6% to 91.1%, a 6.4% improvement. Conclusions: By combining XLNet, FastText, CNN, and Explainable AI techniques, the proposed hybrid deep learning model significantly increases the accuracy of fake news detection while maintaining interpretability. This approach provides a robust and transparent framework for effectively combating misinformation.
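A drastically simplified, hedged stand-in for the hybrid pipeline above: instead of XLNet/FastText/CNN, a TF-IDF plus logistic-regression classifier, with word-level attributions computed as linear SHAP values (coefficient times deviation from the background mean, under a feature-independence assumption). The texts and labels are toy examples, not the study's corpus.

```python
# Sketch: word-level linear SHAP attributions for a toy fake-news classifier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "government confirms new infrastructure budget",
    "miracle cure hidden by doctors revealed",
    "central bank raises interest rates by 25 basis points",
    "celebrity endorses pill that melts fat overnight",
]
labels = np.array([0, 1, 0, 1])          # 0 = real, 1 = fake (toy labels)

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Linear SHAP for one article: contribution_j = w_j * (x_j - mean_j).
x = X[1].toarray()[0]
mean = np.asarray(X.mean(axis=0)).ravel()
contrib = clf.coef_[0] * (x - mean)

terms = np.array(vec.get_feature_names_out())
for i in np.argsort(-np.abs(contrib))[:5]:
    print(f"{terms[i]:>12}: {contrib[i]:+.3f}")   # positive pushes toward 'fake'
```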
- Research Article
13
- 10.1016/j.comnet.2022.109466
- Nov 17, 2022
- Computer Networks
Artificial Intelligence (AI) has demonstrated superhuman capabilities in solving a significant number of tasks, leading to widespread industrial adoption. For in-field network-management applications, however, AI-based solutions have often raised skepticism among practitioners, as their internal reasoning is not exposed and their decisions cannot be easily explained, preventing humans from trusting and even understanding them. To address this shortcoming, a new area in AI, called Explainable AI (XAI), is attracting the attention of both academic and industrial researchers. XAI is concerned with explaining and interpreting the internal reasoning and the outcome of AI-based models to achieve more trustworthy and practical deployment. In this work, we investigate the application of XAI for network management, focusing on the problem of automated failure-cause identification in microwave networks. We first introduce the concept of XAI, highlighting its advantages in the context of network management, and we discuss in detail the concept behind Shapley Additive Explanations (SHAP), the XAI framework considered in our analysis. Then, we propose a framework for XAI-assisted, ML-based automated failure-cause identification in microwave networks, spanning the model's development and deployment phases. For the development phase, we show how to exploit SHAP for feature selection, how to leverage SHAP to inspect misclassified instances during the model's development process, and how to describe the model's global behavior based on SHAP's global explanations. For the deployment phase, we propose a framework based on prediction uncertainty to detect possibly wrong predictions that will then be inspected through XAI.
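A sketch of the deployment-phase idea described above: flag predictions whose class probabilities are too uncertain, and route only those to SHAP-based inspection. The failure-cause data here is synthetic, and the margin threshold and model are assumptions.

```python
# Sketch: uncertainty-based flagging of predictions for SHAP inspection.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=15, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)
top2 = np.sort(proba, axis=1)[:, -2:]
margin = top2[:, 1] - top2[:, 0]          # small margin = uncertain failure cause
flagged = np.where(margin < 0.2)[0]
print(f"{len(flagged)} of {len(X_te)} predictions flagged for inspection")

# Inspect only the flagged instances with SHAP (layout differs across shap versions).
if len(flagged) > 0:
    sv = np.asarray(shap.TreeExplainer(model).shap_values(X_te[flagged[:5]]))
    print("SHAP attribution tensor shape:", sv.shape)
```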
- Supplementary Content
- 10.1093/eurpub/ckaf161.1294
- Oct 1, 2025
- The European Journal of Public Health
Background: With the rising use of artificial intelligence (AI) in healthcare, the need for model transparency and interpretability is increasingly emphasized. This study aimed to identify key risk factors for diabetes and compare the predictive performance of an interpretable logistic regression (LR) model with advanced machine learning (ML) algorithms using explainable AI (XAI) tools. Methods: Data were obtained from the 2016 Tunisian Health Examination Survey. LR was used to assess diabetes risk factors through adjusted odds ratios (aOR) and 95% confidence intervals (CI). ML models included Decision Tree (DT), Gradient Boosting (GB), Artificial Neural Network (ANN), and Random Forest (RF). Model performance was assessed via accuracy, recall, F1-score, and area under the curve (AUC). For interpretability, we used Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to visualize feature importance. Results: A total of 8,894 adults were included. The diabetes prevalence was 18.9% [17.1-21.2]. Significant predictors in LR included dyslipidemia (aOR=2.98 [2.88-3.30]), low socioeconomic status (aOR=1.27 [1.16-1.38]), urban residence (aOR=1.19 [1.02-1.35]), and higher BMI (aOR=1.19 [1.11-1.29]). LR achieved an accuracy of 82.8%, recall of 97.2%, F1-score of 90.1%, and AUC of 77.8%. Among ML models, DT performed the worst (AUC=59.6%). GB and RF outperformed other models in AUC (79.7% and 77.6%, respectively), accuracy (83.3% and 83.2%), and F1-score (90.4% and 90.3%). ANN showed the highest recall (98.6%). GB was selected as the top-performing model. SHAP and LIME consistently identified age, dyslipidemia, systolic blood pressure, waist circumference, BMI, and healthcare use as top predictors. Conclusions: This study illustrates the utility of XAI in enhancing the interpretability of ML models for diabetes risk prediction. Further research is needed to validate findings and promote standardized evaluation frameworks for AI in healthcare. Key messages: Explainable AI methods can improve the transparency of machine learning models used for diabetes risk prediction in population health studies. Gradient Boosting outperformed other models, while SHAP and LIME consistently highlighted key modifiable and socio-demographic risk factors.
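A hedged sketch of the interpretable-baseline step only: adjusted odds ratios with 95% confidence intervals from a logistic regression, fitted here on synthetic data standing in for the survey variables named in the abstract (dyslipidemia, residence, BMI, and so on).

```python
# Sketch: adjusted odds ratios (aOR) and 95% CIs from a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "dyslipidemia": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
    "low_ses": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 4, n),
})
logit_p = -3 + 1.1 * df["dyslipidemia"] + 0.2 * df["urban"] + 0.05 * df["bmi"]
df["diabetes"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["dyslipidemia", "urban", "low_ses", "bmi"]])
fit = sm.Logit(df["diabetes"], X).fit(disp=0)

aor = np.exp(fit.params)        # adjusted odds ratios
ci = np.exp(fit.conf_int())     # 95% CI on the OR scale
print(pd.concat([aor.rename("aOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```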
- Research Article
4
- 10.52783/jes.1480
- Apr 4, 2024
- Journal of Electrical Systems
XAI is critical for establishing trust and enabling the appropriate development of machine learning models. By offering transparency into how these models make judgements, XAI enables researchers and users to uncover potential biases, acknowledge limitations, and ultimately enhance the fairness and dependability of AI systems. In this paper, we demonstrate two techniques, LIME and SHAP, used to improve the interpretability of machine learning models. Assessing Explainable AI (XAI) approaches is critical in the search for transparent and interpretable artificial intelligence (AI) models. Explainable AI (XAI) approaches are designed to provide insight into how complex models make decisions. This paper thoroughly analyzes two prominent XAI methods: SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). This study aims to understand the decision made by a machine learning model and how the model came to that decision. We discuss the approaches and frameworks of both LIME and SHAP and assess their behavior in explaining the model's outcome.
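A side-by-side sketch of the two methods assessed above on a single prediction of a small tabular model; the dataset and model are placeholders chosen for reproducibility, not the paper's experimental setup.

```python
# Sketch: SHAP and LIME attributions for the same prediction, side by side.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# SHAP: exact tree-based attributions for one instance.
shap_vals = np.asarray(shap.TreeExplainer(model).shap_values(X_te[:1])).reshape(-1)
shap_top = np.argsort(-np.abs(shap_vals))[:5]
print("SHAP top features:", [data.feature_names[i] for i in shap_top])

# LIME: weights of a sparse local surrogate around the same instance.
lime_exp = LimeTabularExplainer(
    X_tr, feature_names=list(data.feature_names), mode="classification"
).explain_instance(X_te[0], model.predict_proba, num_features=5)
print("LIME top features:", [rule for rule, _ in lime_exp.as_list()])
```

Comparing which features each method ranks highest for the same instance is one practical way to assess the consistency of their explanations.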
- Research Article
9
- 10.52783/jes.1768
- Mar 31, 2024
- Journal of Electrical Systems
XAI is critical for establishing trust and enabling the appropriate development of machine learning models. By offering transparency into how these models make judgements, XAI enables researchers and users to uncover potential biases, acknowledge limitations, and ultimately enhance the fairness and dependability of AI systems. In this paper, we demonstrate two techniques, LIME and SHAP, used to improve the interpretability of machine learning models. Assessing Explainable AI (XAI) approaches is critical in the search for transparent and interpretable artificial intelligence (AI) models. Explainable AI (XAI) approaches are designed to provide insight into how complex models make decisions. This paper thoroughly analyzes two prominent XAI methods: SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). This study aims to understand the decision made by a machine learning model and how the model came to that decision. We discuss the approaches and frameworks of both LIME and SHAP and assess their behavior in explaining the model's outcome.
- Research Article
1
- 10.1016/j.cageo.2024.105738
- Oct 16, 2024
- Computers and Geosciences
Lithological classification is essential for understanding the spatial distribution of rocks, especially in arid crystalline areas. Recent advancements in artificial intelligence (AI) combined with multi-spectral satellite imagery have been utilized to enhance lithological mapping in these areas. Here we employed different AI models, namely Support Vector Machine (SVM), Random Forest Classification (RFC), Logistic Regression, XGBoost, and K-nearest neighbors (KNN), for lithological mapping. This was followed by the application of explainable AI (XAI) for lithological discrimination (LD), which is still not widely explored. Based on the highest accuracy and F1 score among the aforementioned models, the RFC model outperformed all of them and was therefore integrated with XAI using the SHapley Additive exPlanations (SHAP) method. This approach successfully identified critical multi-spectral features for LD in arid crystalline zones when applied to the Landsat-8, Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and SRTM-DEM datasets covering the Hammash and Wadi Fatimah areas in Egypt and the Kingdom of Saudi Arabia, respectively. Field validation in the Hammash area confirmed the RFC model's efficacy, achieving a satisfactory 94% overall accuracy for 18 features. SHAP was able to identify the top ten features for proper LD over the Hammash area with 90.3% accuracy despite the complex nature of the ophiolitic mélange. For validation purposes, RFC was then utilized in the Wadi Fatimah region using only the top 10 critical features rendered from the SHAP analysis; it performed well, achieving 93% accuracy. Notably, XAI/SHAP results indicated that elevation data, Landsat-8's Green Band (B3), and the two ASTER SWIR bands (B5 and B6) were essential and significant for identifying island arc rocks. Moreover, the SHAP model effectively delineated complex mélange matrices, primarily using the ASTER SWIR band (B8). Our findings highlight the successful combination of RFC with XAI for LD and its potential utilization in similar arid crystalline environments worldwide.
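A sketch of the two-stage workflow described above under assumed data: a Random Forest over 18 spectral/terrain features, mean |SHAP| ranking, then a refit using only the top-ranked features. The feature count echoes the abstract, but the values are synthetic.

```python
# Sketch: mean-|SHAP| feature ranking and refit on the top-10 features.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

bands = [f"feature_{i}" for i in range(18)]   # e.g. Landsat/ASTER bands, ratios, DEM
X, y = make_classification(n_samples=3000, n_features=18, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

rfc = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

sv = np.asarray(shap.TreeExplainer(rfc).shap_values(X[:300]))
# Collapse over samples and classes, whichever axis layout this shap version uses.
importance = np.abs(sv).mean(axis=tuple(i for i, s in enumerate(sv.shape) if s != 18))
top10 = np.argsort(-importance)[:10]
print("top-10 features:", [bands[i] for i in top10])

reduced_score = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                                X[:, top10], y, cv=3).mean()
print(f"cross-validated accuracy with top-10 features: {reduced_score:.3f}")
```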
- Research Article
1
- 10.60084/ijma.v3i1.301
- Jun 8, 2025
- Indatu Journal of Management and Accounting
As digital payment systems grow in volume and complexity, credit card fraud continues to be a significant threat to financial institutions. While machine learning (ML) has emerged as a powerful tool for detecting fraudulent activity, its adoption in managerial settings is hindered by a lack of transparency and interpretability. This study examines how explainable artificial intelligence (XAI) can enhance managerial oversight in the deployment of ML-based fraud detection systems. Using a publicly available, simulated dataset of credit card transactions, we developed and evaluated four ML models: Logistic Regression, Naïve Bayes, Decision Tree, and Random Forest. Performance was assessed using standard metrics, including accuracy, precision, recall, and F1-score. The Random Forest model demonstrated superior classification performance but also presented significant interpretability challenges due to its complexity. To address this gap, we applied SHAP (SHapley Additive exPlanations), a leading method for explaining the outputs of the Random Forest model. SHAP analysis revealed that transaction amount and merchant category were the most influential features in determining the risk of fraud. SHAP plots were used to make these insights accessible to non-technical stakeholders. The findings underscore the importance of XAI in promoting transparency, facilitating regulatory compliance, and fostering trust in AI-driven decisions. This study offers practical guidance for managers, auditors, and policymakers seeking to integrate explainable ML tools into financial risk management processes, ensuring that technological advancements are balanced with accountability and informed human oversight.
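A sketch of the communication step emphasised above: a global SHAP bar chart that a non-technical reviewer can read. The transaction data is simulated here, and the feature names are placeholders loosely echoing the abstract, not the study's dataset.

```python
# Sketch: a SHAP bar summary plot of a fraud classifier for stakeholder review.
import numpy as np
import pandas as pd
import shap
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "amount": rng.lognormal(3, 1, n),
    "merchant_category": rng.integers(0, 12, n),
    "hour_of_day": rng.integers(0, 24, n),
    "distance_from_home": rng.exponential(10, n),
})
y = ((X["amount"] > 150) & (X["hour_of_day"] < 6)).astype(int)   # toy fraud rule

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

sv = np.asarray(shap.TreeExplainer(model).shap_values(X.iloc[:500]))
# Keep the attributions for the "fraud" class regardless of shap's output layout.
fraud_sv = sv[1] if sv.shape[0] == 2 else sv[..., 1]

shap.summary_plot(fraud_sv, X.iloc[:500], plot_type="bar", show=False)
plt.tight_layout()
plt.savefig("fraud_shap_importance.png")   # chart suitable for a non-technical audience
```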
- Research Article
- 10.1016/j.watres.2025.124198
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124228
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124207
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124156
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124253
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124922
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124895
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124276
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124279
- Nov 1, 2025
- Water Research
- Research Article
- 10.1016/j.watres.2025.124299
- Nov 1, 2025
- Water Research