
Related Topics

  • Field Of Artificial Intelligence
  • Artificial Intelligence Systems
  • Artificial Intelligence Technology
  • Artificial Intelligence Research
  • Artificial Intelligence

Articles published on Explainable AI

10,306 search results, sorted by recency
  • Research Article
  • 10.1038/s41598-026-43507-7
Transparent AI for mathematics: transformer-based large language models for mathematical entity relationship extraction with XAI.
  • Mar 11, 2026
  • Scientific reports
  • Tanjim Taharat Aurpa

Mathematical text understanding is a challenging task due to the presence of specialized entities and complex relationships between them. This study formulates mathematical problem interpretation as a Mathematical Entity Relation Extraction (MERE) task, where operands are treated as entities and operators as their relationships. Transformer-based models are applied to automatically extract these relations from mathematical text, with Bidirectional Encoder Representations from Transformers (BERT) achieving the best performance, reaching an accuracy of 99.39%. To enhance transparency and trust in the model's predictions, Explainable Artificial Intelligence (XAI) is incorporated using Shapley Additive Explanations (SHAP). The explainability analysis reveals how specific textual and mathematical features influence relation prediction, providing insights into feature importance and model behavior. By combining transformer-based learning, a task-specific dataset, and explainable modeling, this work offers an effective and interpretable framework for MERE, supporting future applications in automated problem solving, knowledge graph construction, and intelligent educational systems.
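The MERE formulation described above (operands treated as entities, operators as their relations) can be illustrated with a minimal rule-based sketch in Python. This is not the paper's method (the study fine-tunes transformer models such as BERT); the function name and the regex tokenization here are illustrative assumptions:

```python
import re

def extract_relations(expression: str):
    """Toy MERE: treat numeric operands as entities and the operators
    between them as their relations. Purely illustrative; the paper
    uses fine-tuned transformer models, not hand-written rules."""
    # Tokenize into numbers and the four basic arithmetic operators.
    tokens = re.findall(r"\d+(?:\.\d+)?|[+\-*/]", expression)
    triples = []
    for i in range(1, len(tokens) - 1):
        if tokens[i] in "+-*/":
            # (entity, relation, entity) triple around each operator.
            triples.append((tokens[i - 1], tokens[i], tokens[i + 1]))
    return triples

print(extract_relations("12 + 7 * 3"))  # [('12', '+', '7'), ('7', '*', '3')]
```

A learned model would replace the regex with contextual token embeddings, but the output structure (entity-relation-entity triples) is the same.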

  • Research Article
  • 10.1038/s41598-026-38218-y
Ensemble-based high-performance deep learning models for medical image retrieval in breast cancer detection.
  • Mar 11, 2026
  • Scientific reports
  • Aya E Fawzy + 3 more

As digital imaging in healthcare grows rapidly, managing vast volumes of medical image data becomes increasingly challenging. Content-Based Medical Image Retrieval (CBMIR) systems help with this, but they struggle with the gap between low-level image features and what these images mean in a clinical setting. This paper presents a new deep learning approach for CBMIR that combines Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Explainable AI (XAI). Trained on the Breast Ultrasound Image (BUSI) dataset, this hybrid model classifies images and retrieves relevant results based on its predictions. It reaches a classification accuracy of 99.24% and performs well in retrieval tasks.
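The retrieval step described here (classify first, then find relevant results) resembles class-filtered nearest-neighbor search over learned embeddings. Below is a minimal NumPy sketch under that assumption; the actual system uses CNN/RNN-derived features, and all names and shapes here are hypothetical:

```python
import numpy as np

def retrieve(query_emb, db_embs, db_labels, predicted_class, k=3):
    """Class-filtered cosine-similarity retrieval: keep only database
    images sharing the query's predicted class, then rank them by the
    cosine similarity of their (e.g. CNN-derived) embeddings."""
    idx = np.where(db_labels == predicted_class)[0]
    cands = db_embs[idx]
    sims = cands @ query_emb / (
        np.linalg.norm(cands, axis=1) * np.linalg.norm(query_emb))
    order = np.argsort(-sims)[:k]          # highest similarity first
    return idx[order]                      # indices into the database
```

In a real CBMIR pipeline the embeddings would come from the penultimate layer of the trained classifier, so retrieval and classification share one representation.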

  • Research Article
  • 10.55041/ijsrem57466
Secure Interpretable Deep Convolutional Network (SIDCN) for Malware Detection
  • Mar 11, 2026
  • International Journal of Scientific Research in Engineering and Management
  • Dr. N. Mahendiran + 1 more

Machine learning (ML) and deep learning (DL) approaches have become part of modern malware detection systems because of their capability to evaluate complex and large amounts of data. However, while many models have demonstrated strong detection accuracy in laboratory settings, they show significant limitations when deployed in operational, security-critical environments, including a lack of interpretability, exposure to adversarial evasion, high false-positive rates, and performance degradation over time. This paper proposes a Secure Interpretable Deep Convolutional Network (SIDCN) that incorporates interpretability into the learning process. In contrast to conventional black-box models and post-hoc explanation methods, SIDCN co-optimizes malware detection accuracy and the stability of explanatory outputs. The proposed approach enforces explanation-consistency regularization, which yields stable and robust explanatory outputs under adversarial perturbations. Additionally, the instability of explanatory outputs is used as an additional signal to identify behavior that may be abnormal or evasive. Results from both an experimental analysis and real-world attack case studies demonstrate that the proposed SIDCN yields enhanced trustworthiness, robustness, and operational effectiveness compared with conventional ML/DL-based malware detection systems and is therefore applicable within real-time security scenarios. Keywords: Malware Detection, Interpretable Deep Learning, Cybersecurity, Adversarial Attacks, Explainable AI
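Explanation-consistency regularization, as named in the abstract, can be sketched as a task loss plus a penalty on how much a saliency map drifts under a small input perturbation. The toy below uses a linear scorer so the saliency has a closed form; the trade-off weight `lam` and the gradient-times-input saliency are assumptions, not the paper's exact formulation:

```python
import numpy as np

def saliency(w, x):
    # Gradient-times-input saliency for a linear scorer f(x) = w @ x:
    # the input-gradient of f is just w, so the map is w * x.
    return w * x

def consistency_regularized_loss(w, x, y, eps, lam=0.5):
    """Sketch of explanation-consistency regularization: squared-error
    task loss plus lam times the squared drift of the saliency map
    under a small input perturbation eps (both terms assumed)."""
    pred = w @ x
    task = (pred - y) ** 2
    drift = np.sum((saliency(w, x) - saliency(w, x + eps)) ** 2)
    return task + lam * drift
```

At inference time the same drift term could serve as the abstract's "instability" signal for flagging possibly evasive inputs.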

  • Research Article
  • 10.3390/info17030281
The Evolution of Visualization Technologies in Healthcare: A Bibliometric Analysis of Studies Published from 1994 to 2025
  • Mar 11, 2026
  • Information
  • Fangzhong Cheng + 2 more

Healthcare visualization has become a crucial approach for interpreting complex medical data, supporting informed clinical decision-making, and enhancing public health management. However, existing reviews tend to focus on specific technologies or application scenarios, offering limited insight into the field’s overall knowledge structure, developmental trajectory, and interdisciplinary integration. To address this gap, this study systematically reviews 1121 publications from 1994 to 2025 indexed in the Web of Science Core Collection. By combining bibliometric analysis with qualitative assessment, it maps the field’s evolution and underlying research paradigms. The findings reveal a clear shift from early innovation in technical tools toward the realization of clinical value, giving rise to an integrated research system that connects technology, data, clinical practice, and public health. Recent research has progressed beyond initial explorations of medical imaging, standalone devices, and isolated techniques, moving instead toward core domains such as immersive medical visualization, medical data visualization and analytics, health information systems and decision support, AI-assisted epidemic prediction and diagnosis, and integrated IoT-based healthcare frameworks. Looking ahead, an assessment of future trends suggests that, among other directions, the deep integration of explainable artificial intelligence (XAI) with visualization analysis, the development of IoT-driven real-time interactive systems, and the extension of visualization-enabled services from clinical applications toward inclusive population-level health coverage represent core driving forces for the future development of this field. These insights offer strategic guidance for future research, inform the design principles of next-generation visualization systems, and provide new models of interdisciplinary collaboration. The results also offer evidence-based support for health resource planning, technological innovation, and policy formulation.

  • Research Article
  • 10.46647/8k0ngh53
Improving hospital resource management using explainable LOS prediction model
  • Mar 11, 2026
  • Research Digest on Engineering Management and Social Innovations
  • K. Sai Laxmi Snigdha + 2 more

Effective management of hospital resources is critical to delivering high-quality care while keeping costs under control. One aspect of resource management involves understanding patients' Length of Stay (LOS). Understanding LOS helps the hospital manage its patient population more effectively and ensure efficient care. Unfortunately, traditional statistical models fail to deal effectively with the intricacies of the data obtained from the hospital's electronic records. This research introduces an effective and efficient framework for LOS prediction using machine learning models such as XGBoost and LSTM. To ensure the model's predictions are accurate and reliable, the framework integrates explainable AI via SHAP, which helps in understanding the model's decision-making process. The framework provides two primary dashboards: an Admin Dashboard and a Patient Dashboard. The Admin Dashboard assists the hospital's staff in effectively utilizing the capabilities of the framework. The Patient Dashboard, in turn, empowers patients with a safe place to store medical records, view test results, monitor recovery progress, schedule appointments, and communicate with an AI-based health assistant. The integration of complex analytics, transparent AI explanations, visualization tools, and patient-centric tools is intended to enhance hospital decision-making, patient engagement, and efficient resource management in healthcare.
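SHAP, used here to interpret LOS predictions, has an exact closed form for a linear model: phi_i = w_i * (x_i - E[x_i]), and the attributions sum to the prediction minus the mean prediction. The paper uses XGBoost and LSTM models (where SHAP requires tree- or sampling-based approximation), so this linear sketch only illustrates the attribution idea; all values below are made up:

```python
import numpy as np

def linear_shap(w, x, background):
    """Exact SHAP values for a linear model f(x) = w @ x:
    phi_i = w_i * (x_i - mean_i over the background data).
    The attributions sum to f(x) minus the mean prediction."""
    mean = background.mean(axis=0)
    return w * (x - mean)

w = np.array([2.0, -1.0])                  # hypothetical model weights
bg = np.array([[0.0, 0.0], [2.0, 2.0]])    # background mean = [1, 1]
x = np.array([3.0, 1.0])                   # one patient's features
phi = linear_shap(w, x, bg)
# phi sums to f(x) - E[f]: (2*3 - 1*1) - (2*1 - 1*1) = 5 - 1 = 4
```

For the LOS use case, a positive phi_i would mean feature i pushed the predicted stay above the average stay.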

  • Research Article
  • 10.1007/s10462-026-11518-5
Beyond the black box: lessons in explainability from AI in mammography
  • Mar 11, 2026
  • Artificial Intelligence Review
  • Andrea Ciardiello + 5 more

With AI already in clinical use, mammography serves as a critical test-bed for the challenges and potential of medical AI. However, its progress is hampered by the ‘black box’ nature of current AI algorithms, limiting clinician trust and transparency. This review analyses the field of Explainable AI (XAI) as a solution, examining its motivations, methods, and metrics. We find the field is dominated by post-hoc saliency methods that provide plausible but not necessarily faithful explanations of AI decision-making. This focus has led to an evaluation gap, where localization accuracy is used as a proxy for explanatory quality without verifying the model’s true reasoning. Inherently interpretable models that could offer more faithful insights are rarely implemented, and a lack of human-centred studies further obscures the clinical utility of current XAI techniques. We argue that for AI in mammography to realize its full potential, the field must urgently shift focus from creating plausible explanations to developing and validating inherently interpretable systems that provide faithful, clinically meaningful insights.

  • Research Article
  • 10.3389/fbinf.2026.1760987
An explainable-AI framework reveals novel lncRNAs specific for breast cancer subtypes
  • Mar 10, 2026
  • Frontiers in Bioinformatics
  • Jai Chand Patel + 2 more

Background: Long non-coding RNAs (lncRNAs) have emerged as important regulators in cancer biology, yet their potential for cancer subtyping remains underexplored, particularly in the context of large-scale, multi-class supervised classification frameworks, due to limited publicly available data or their use only as auxiliary features in classification tasks. Methods: In this study, we utilized an expansive set of 7,177 lncRNAs obtained from 1,021 breast cancer (BRCA) transcriptomics datasets for subtyping using an explainable artificial intelligence (AI) framework. lncRNA, mRNA, and miRNA features were used to build machine learning (ML) models individually and in combination. Four ML classifiers (Naïve Bayes, Random Forest, Artificial Neural Network, and XGBoost) were employed to evaluate subtype classification performance. Results: Using lncRNAs alone, XGBoost demonstrated strong performance with an accuracy of 89.2% and AUROC of 0.99. Adding miRNA or mRNA features to lncRNA marginally improved the accuracy to 90.8% and 92.2%, respectively, while using all three feature types together provided no further gain. A sequential key-feature identification pipeline (ANOVA, Boruta, SHAP) identified interpretable subtype-specific biomarker panels, yielding 119, 66, 54, and 24 unique features for Luminal A, Luminal B, HER2+, and Basal subtypes, respectively. Further lncRNA characterization followed by survival analysis revealed significant subtype-specific novel lncRNAs, including CUFF.25255 (LumA), CUFF.20237 and CUFF.3888 (LumB), CUFF.22414 (HER2+), and CUFF.26607 and CUFF.1961 (Basal). Conclusion: Our findings highlight the diagnostic and biomarker discovery potential of lncRNAs. The explainable-AI framework implemented here provides a systematic large-scale evaluation of lncRNA-only and integrative models for multi-class BRCA subtyping and can be adapted to other cancers using existing cancer transcriptomics data in public databases.
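The first stage of the paper's feature-identification pipeline (ANOVA, before Boruta and SHAP) ranks features by a one-way F statistic across the subtype classes. A self-contained NumPy version is shown below purely to illustrate the filter; the study's exact implementation is not specified in the abstract:

```python
import numpy as np

def anova_f(feature, labels):
    """One-way ANOVA F statistic for a single feature across the
    classes in `labels`: ratio of between-group to within-group
    variance. High F = feature separates the classes well."""
    classes = np.unique(labels)
    groups = [feature[labels == c] for c in classes]
    grand = feature.mean()
    k, n = len(groups), len(feature)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

In a pipeline like the paper's, one would compute this per lncRNA, keep the top-ranked features, and pass only those on to Boruta and SHAP.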

  • Research Article
  • 10.1002/mar.70129
Transparency Matters: Psychological Ownership and Trust as Mediators of Explainable Artificial Intelligence Effectiveness
  • Mar 10, 2026
  • Psychology & Marketing
  • Suresh Malodia + 3 more

AI‐based recommender systems shape many consumer decisions, but users often have limited information about why a recommendation is presented. This paper examines how explainable AI (XAI) design influences consumers' follow‐through, and when these effects are stronger. We conceptualize XAI design along two dimensions: explanation level (high vs. low diagnostic detail) and explanation type (process‐oriented vs. outcome‐oriented), and we examine boundary conditions across recommendation context (product vs. content). Four scenario‐based, between‐subjects experiments were conducted with Prolific participants who reported familiarity with recommendation systems (total N = 1080). Study 1 establishes the baseline effect: high (vs. low) explanation level increases intention to follow the recommendation, and the effect is robust under divided attention. Study 2 shows that the benefit of higher explanation level is context‐dependent, with stronger effects in content recommendations than in product recommendations. Study 3 shows that explanation type also shapes the effect of explanation level on follow‐through, with process‐oriented explanations producing a larger advantage for high (vs. low) explanation level than outcome‐oriented explanations. Study 4 tests the proposed mechanisms in a 2 × 2 × 2 design and finds that explanation level affects follow‐through primarily through trust and psychological ownership, with these indirect effects stronger in content (vs. product) contexts and under process‐oriented (vs. outcome‐oriented) explanations. Together, the findings specify how explanation level, explanation type, and context jointly determine when XAI increases follow‐through, and they identify trust and psychological ownership as mechanisms through which explanation design translates into consumer action.

  • Research Article
  • 10.1186/s13677-026-00878-6
A trustworthy cybersecurity model for transparent cyberattack detection using Bald Eagle Search tuned XGBoost and explainable AI
  • Mar 10, 2026
  • Journal of Cloud Computing
  • Shakti Kundu + 7 more


  • Research Article
  • 10.1038/s41598-026-42335-z
Environmental education as a means of combating growing environmental pollution: an optimized-explainable artificial intelligence (XAI) approach.
  • Mar 9, 2026
  • Scientific reports
  • Osama Abduljalil Mohammad Hamad + 2 more

This work aimed to understand the impact of education in addressing growing environmental pollution and radiation exposure, which are attributed to both natural phenomena and human activities. It is a case study of two Libyan universities, namely Omar Al-Mukhtar University (Natural Resources and Environmental Sciences) and the Qubba Branch of the University of Derna, that are willing to utilize their knowledge in mitigating and combating environmental pollution. The total population of students studying environmental science and environmental education at these universities is 425, of whom 402 responded to the questionnaire used in the current study. The questionnaire comprises four sections: socio-demographic information, knowledge, concern, and willingness/behavior. Knowledge/environmental education was treated as the dependent variable, while the other variables were treated as independent variables. Descriptive statistics with graphical representation of the results show that 82.2% of the students responded with 5 or above (on a scale of 1 to 10), indicating that they know the major environmental pollutants. Also, 45% of the students responded with 9 or 10, demonstrating knowledge of the major causes of environmental pollution. Furthermore, 72.2% of respondents answered 6 or above, indicating that they know the major solutions to environmental pollution. Based on these answers, interpretable artificial intelligence was used to determine the impacts of the independent variables on the targets. Overall, the performance results showed that GPR-BO-M2 achieved the highest performance among all combinations used in the modelling stage, with R2-values = 0.951/0.937, RMSE = 0.684/0.651, MSE = 0.467/0.424 and MAE = 0.263/0.232. Hence, the results obtained in this work can be utilized by students, educationists, policy makers and experts in understanding and mitigating environmental pollution.
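The metrics reported above (R², RMSE, MSE, MAE) can be computed from predictions with a few lines of NumPy. This is a generic sketch of the standard definitions, not the authors' code:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R^2, RMSE, MSE and MAE, the four goodness-of-fit measures
    quoted in the abstract's model comparison (e.g. GPR-BO-M2)."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot          # fraction of variance explained
    return {"R2": r2, "RMSE": rmse, "MSE": mse, "MAE": mae}
```

The paired values in the abstract (e.g. 0.951/0.937) presumably correspond to training/testing splits, so each split would get its own call to this function.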

  • Research Article
  • 10.3390/earth7020044
A Comprehensive Review of Machine Learning and Deep Learning Methods for Flood Inundation Mapping
  • Mar 9, 2026
  • Earth
  • Abinash Silwal + 6 more

Flood inundation mapping (FIM) is essential in disaster risk management, infrastructure planning, and climate adaptation. Traditional hydrodynamic models, such as the Hydrologic Engineering Center’s River Analysis System (HEC-RAS) and LISFLOOD-Floodplain (LISFLOOD-FP), provide physically interpretable flood simulations but are often data- and computation-intensive and difficult to scale across regions. In recent years, machine learning (ML) and deep learning (DL) approaches have emerged as data-driven alternatives that leverage remote sensing observations, digital elevation models (DEMs), and hydro-climatic datasets to enable scalable and near-real-time flood mapping. Our review synthesizes recent advances in ML-based flood inundation mapping, categorizing methods into traditional machine learning techniques (e.g., Random Forest (RF), Support Vector Machines (SVM), Gradient Boosting (GB)), deep learning architectures (e.g., Convolutional Neural Networks (CNNs), U-Net, Long Short-Term Memory networks (LSTM)), and emerging hybrid and physics-informed frameworks. We evaluate model performance across flood extent and flood depth estimation tasks, highlighting strengths, limitations, and common benchmarking practices reported in the literature. The review identifies key challenges related to model interpretability, data bias, transferability, and regulatory acceptance, and highlights recent progress in explainable artificial intelligence (XAI), uncertainty-aware modeling, and physics-informed learning as pathways toward operational adoption. By unifying terminology, performance metrics, and methodological comparisons, this review provides a coherent framework for advancing trustworthy, scalable, and decision-relevant flood inundation mapping under increasing climate-driven flood risk.

  • Research Article
  • 10.3390/ai7030099
Epistemic Agency in the Age of Large Language Models: Design Principles for Knowledge-Building AI
  • Mar 9, 2026
  • AI
  • Earl Woodruff + 1 more

Introduction: Large language models (LLMs) are increasingly employed as cognitive aids in research and professional inquiry, yet their fluent outputs are frequently regarded as authoritative knowledge. We contend that this practice signifies a fundamental epistemic misalignment. Methods/Approach: Building on Peirce’s theory of inquiry, Sellars’ concept of the space of reasons, Stanovich’s tripartite model of cognition, and knowledge-building theory, we develop a conceptual framework for analyzing epistemic agency in human–LLM collaboration. Results/Argument: We demonstrate that LLM outputs fail to satisfy the conditions for knowledge because they lack reflective regulation, resistance to revision, and normative commitment. While LLMs display strong autonomous and algorithmic abilities (e.g., pattern recognition and hypothesis development), reflective control remains a distinctly human function. This asymmetry supports a principled division of epistemic labour and motivates the concept of the Knowledge-Building Partner (KBP): an AI system designed to support inquiry without claiming epistemic authority. Discussion/Implications: We identify prompt-, system-, and model-level design requirements and introduce a triangulated framework for operationalizing epistemic agency through explainable AI, discourse analysis, and rational-thinking measures. These contributions collectively reposition LLM limitations as epistemic design challenges rather than technical issues.

  • Research Article
  • 10.3389/fmed.2026.1764292
Biochip-simulated genotype signals enable accurate and interpretable AMR prediction via machine learning
  • Mar 9, 2026
  • Frontiers in Medicine
  • Zetian Fu

Background: Antimicrobial resistance (AMR) is an escalating global health crisis, driven by the rapid evolution of resistant pathogens and the limitations of traditional diagnostic methods. Current approaches such as culture-based techniques are time-intensive, while molecular methods demand specialized infrastructure. Objective: This study aims to develop a smart pathogen sensing framework using biochip-simulated genotypic signals combined with machine learning (ML) and explainable AI. The goal is to accurately predict AMR profiles while enabling model interpretability and personalized feedback through Agentic AI. Methods: From a publicly available dataset of over 400,000 real Salmonella enterica isolates, 10,000 samples were randomly selected, and biochip-like analog signals were synthetically generated from their AMR genotype profiles. KMeans clustering was employed for unsupervised subtype discovery, while supervised models including Random Forest, XGBoost, and a Voting Classifier were trained using fivefold stratified cross-validation. Model explainability was achieved via SHAP values, and a rule-based recommendation system was designed to convert predictions into actionable, patient-level insights. Results: The proposed Voting Classifier achieved superior multi-class prediction performance, with high accuracy, precision, recall, F1-score, and AUC across diverse resistance profiles. UMAP visualizations and silhouette scores confirmed robust clustering, while SHAP interpretation enhanced transparency by identifying key resistance genes. A rule-based recommendation system translated SHAP-ranked gene contributions into context-specific clinical insights, improving interpretability and practical usability. Comparative analysis with state-of-the-art studies highlighted the novelty of the biochip-integrated, explainable pipeline. Conclusion: This study presents a scalable, proof-of-concept diagnostic framework that integrates simulated biochip genotypes, interpretable ML models, and a rule-based recommendation system. By bridging predictive accuracy with actionable insights, the framework offers a potential pathway toward clinically relevant AMR diagnostics, advancing both computational innovation and practical decision support.
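The Voting Classifier reported above, in its soft-voting form, simply averages the base models' class-probability outputs and takes the argmax. A minimal NumPy sketch of that mechanism follows; the study's exact ensemble configuration (weights, voting mode) is an assumption here:

```python
import numpy as np

def soft_vote(prob_list):
    """Soft voting: average the (n_samples x n_classes) probability
    matrices from several base models (e.g. Random Forest, XGBoost)
    and predict the argmax class for each sample."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=1)
```

Averaging probabilities rather than hard labels lets a confident minority model outvote two uncertain ones, which is often why soft voting edges out its base learners.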

  • Research Article
  • 10.2196/86960
AI-Enabled Personalization of Semaglutide Therapy in Type 2 Diabetes: Systematic Review With an Integration Framework.
  • Mar 9, 2026
  • JMIR AI
  • Ghinwa Barakat + 4 more

Type 2 diabetes mellitus (T2D) is a rapidly growing global health concern requiring innovative treatment methods. Ozempic (semaglutide), a glucagon-like peptide-1 receptor agonist, has proven consistently effective in lowering blood glucose levels, supporting weight loss, and minimizing cardiovascular complications. In parallel, artificial intelligence (AI) complements these efforts by converting raw data from wearable devices, electronic health records, and medical imaging into practical insights for efficient, tailored treatment plans. The objective of this systematic review is to examine current evidence on AI-driven methods to optimize Ozempic-based T2D therapy. A total of 18 peer-reviewed articles were identified, revealing four dominant thematic clusters: (1) patient stratification and risk prediction, (2) AI-enhanced imaging for body composition changes, (3) cardiovascular and metabolic risk assessment, and (4) personalized AI-driven dosage. Across multiple metrics, such as glycated hemoglobin reduction, weight loss, cardiovascular benefits, and adverse event mitigation, AI-based approaches outperformed standard fixed-dose regimens. A theoretical framework is proposed for AI-Ozempic integration, with continuous data collection, AI processing, clinical decision support, real-time feedback, and iterative model refinement cycles. Significant gaps remain, including the need for large-scale randomized controlled trials, longer follow-up periods, explainable AI models, regulatory validation, and practical strategies for routine clinical implementation. The findings emphasize AI's potential to transform semaglutide therapy while delineating important paths for future research.

  • Research Article
  • 10.3390/jcp6020051
Beyond Semantic Noise: A Dual-Verification Framework for Thai–English Code-Mixed Malicious Script Detection via XAI-Guided Selective Integration
  • Mar 9, 2026
  • Journal of Cybersecurity and Privacy
  • Prasert Teppap + 3 more

In the evolving cybersecurity landscape, detecting Thai-English code-mixed malicious scripts within high-trust domains such as governmental and academic portals presents a significant defensive challenge. While Transformer-based architectures excel in semantic parsing, they often exhibit ‘Structural Bias,’ misinterpreting the high-entropy syntax of benign legacy HyperText Markup Language (HTML) as malicious obfuscation due to inherent ‘Attention Deficit’ in token-limited models. To address this, we propose an Explainable AI (XAI)-Driven Hybrid Architecture grounded in a ‘Selective Integration’ strategy. Unlike traditional hybrid models, our framework mathematically formalizes the fusion process by synergizing context-aware WangChanBERTa embeddings with orthogonal structural statistics through Dempster-Shafer Theory and Conditional Mutual Information (CMI). The proposed model was validated on a high-fidelity corpus, achieving a state-of-the-art F1-score of 0.9908, significantly outperforming standalone Transformers, Random Forest, and unsupervised baselines. XAI diagnostics revealed a ‘Dual-Validation’ mechanism where structural features act as an epistemic anchor. This mechanism effectively triggers a ‘Semantic Veto’ to filter hallucinations caused by benign complexity, achieving a remarkably low False Positive Rate (FPR) of 0.0116. Our findings demonstrate that hybridization is most effective when engineered features provide mathematical orthogonality to semantic embeddings. This work offers a robust, theoretically grounded framework for securing critical digital infrastructures in low-resource linguistic environments.
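Dempster-Shafer combination, which the framework above uses to fuse semantic and structural evidence, can be sketched with Dempster's rule over a small frame of discernment. The two-hypothesis frame below ('M' for malicious, 'B' for benign) and the mass values are illustrative assumptions, not the paper's configuration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same frame.
    Focal elements are frozensets of hypotheses; mass assigned to
    empty intersections (conflict) is removed and the remainder
    renormalized by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Hypothetical sources: a semantic model and a structural model,
# each assigning belief mass over {malicious, benign}.
semantic = {frozenset({'M'}): 0.6, frozenset({'M', 'B'}): 0.4}
structural = {frozenset({'M'}): 0.5, frozenset({'B'}): 0.3,
              frozenset({'M', 'B'}): 0.2}
fused = dempster_combine(semantic, structural)
```

Mass left on the full frame {M, B} models each source's uncertainty, which is what lets a confident structural signal "veto" a shaky semantic one, in the spirit of the paper's dual-verification idea.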

  • Research Article
  • 10.1016/j.actatropica.2026.108044
Fine-scale mapping of Oncomelania hupensis habitats in eastern China using multi-season Sentinel-2 imagery and semi-supervised deep learning.
  • Mar 7, 2026
  • Acta tropica
  • Kedi Dai + 7 more


  • Research Article
  • 10.1007/s11517-026-03530-2
ADBrainNet: a deep neural network for Autism Spectrum Disorder (ASD) and Attention Deficit and Hyperactivity Disorder (ADHD) classification using resting-state fMRI images based on explainable artificial intelligence.
  • Mar 5, 2026
  • Medical & biological engineering & computing
  • Xinyao Yi + 3 more

Autism Spectrum Disorder (ASD) and Attention Deficit and Hyperactivity Disorder (ADHD) are two psychiatric disorders frequently encountered in children. ADHD is further categorized into three subtypes. The diagnostic processes for these conditions are complex and often prone to misclassification. We proposed a lightweight deep neural network, ADBrainNet, to differentiate ASD, ADHD combined, ADHD hyperactive/impulsive, ADHD inattentive and neurotypical individuals. Our methodology was benchmarked against prevalent ImageNet transfer learning methods, including AlexNet, MobileNet, ResNet18, and Xception, for training on resting-state fMRI images sourced from ABIDE and ADHD-200 datasets. ADBrainNet achieved superior performance on the independent external testing set through five-fold cross-validation, with a mean (± standard deviation) accuracy, precision, recall, and F1 score of 61.87% (± 5.59%), 65.72% (± 6.98%), 61.87% (± 5.59%), and 62.50% (± 5.78%), respectively. Furthermore, the explainable artificial intelligence algorithm LIME was employed to explore the most significant features during ADBrainNet's decision process. Our model provides an interpretable computational framework for neuroimaging-based classification between ASD and ADHD subtypes. This approach may inform future research and, upon further validation and comparison with clinician performance, could potentially aid in patient assessment, stratification, and management of psychiatric disorders.
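LIME, used above to probe ADBrainNet's decisions, explains a single prediction by fitting a proximity-weighted linear surrogate around the input. Below is a generic NumPy sketch of that idea; the sampling scale, kernel, and function names are assumptions, and the study applies LIME to fMRI images rather than small tabular vectors:

```python
import numpy as np

def lime_weights(model, x, n_samples=500, scale=0.1, seed=0):
    """LIME-style local explanation sketch: perturb x with Gaussian
    noise, weight samples by proximity to x, and fit a weighted
    linear surrogate whose coefficients approximate each feature's
    local influence on the (black-box) model."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = np.array([model(row) for row in X])
    dist = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(dist ** 2) / (2 * scale ** 2))   # proximity kernel
    Xb = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                              # drop the intercept
```

For a model that is already linear near x, the surrogate recovers its coefficients exactly, which is a convenient sanity check before using the method on a deep network.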

  • Research Article
  • 10.1080/07366981.2026.2637772
AI-driven auditing: trends, themes, and research trajectories
  • Mar 5, 2026
  • EDPACS
  • Aidi Ahmi + 1 more

This study examines the evolution of artificial intelligence (AI) in auditing by systematically synthesizing 26 years of global scholarship to address conceptual fragmentation in the field. Guided by the PRISMA protocol, it analyses 269 Scopus-indexed journal articles, harmonized using OpenRefine and biblioMagika, and examined through performance analysis, co-occurrence mapping, and life-cycle modeling with biblioMagika, VOSviewer, and Biblioshiny. Integrating behavioral, technological, and governance perspectives, the study clarifies the domain’s intellectual foundations, thematic structure, and temporal development. Findings show rapid expansion since 2019, reflecting a shift from early rule-based and expert systems toward advanced machine learning, explainable AI, and generative AI applications. Six thematic clusters organize the literature: computational audit analytics; governance and responsible AI; AI–audit integration; blockchain-enabled audit evidence; automation and audit quality; and analytics-driven continuous auditing. Temporal evidence indicates an increasing normative orientation, with stronger emphasis on transparency, accountability, trust, and ethical governance. Life-cycle modeling suggests the field remains in a steep growth phase, indicating substantial scope for further theoretical and practical advancement. Although limited to Scopus-indexed journal articles, the results offer a foundation for research on auditor–AI interaction, behavioral effects of automation, governance mechanisms for AI assurance, and institutional variation in technology adoption. By providing a longitudinal science-mapping analysis, the study consolidates publication trends, intellectual structure, thematic evolution, and life-cycle forecasting, and identifies research opportunities likely to shape audit methodology, governance, and professional judgment in an era of intelligent systems.

  • Research Article
  • 10.21608/javs.2026.447909.1845
Mastitis Prediction by Explainable Artificial Intelligence Learning at the Initial Phase of Infection in Dairy Cows
  • Mar 5, 2026
  • Journal of Applied Veterinary Sciences
  • Seyed Abolghasem Rakhtala Rostami + 2 more

  • Research Article
  • 10.1016/j.virol.2026.110863
Machine learning framework for early detection of polio outbreaks from acute flaccid paralysis surveillance data.
  • Mar 5, 2026
  • Virology
  • Honey Gemechu + 10 more


Copyright 2026 Cactus Communications. All rights reserved.