A Semantic Web-Enabled Explainable AI Framework for Interoperable and Scalable Detection of Autism Spectrum Disorder

Abstract

Autism Spectrum Disorder (ASD) is a lifelong condition that affects communication, social interaction, and behavior. Artificial intelligence (AI) shows promise for early detection, but many models struggle with accuracy, scalability, and interpretability, limiting clinical use. To address these gaps, this paper proposes a semantic web–enabled explainable AI (XAI) framework for accurate and interoperable ASD diagnosis. The framework has three parts: (1) a semantic data integration layer that harmonizes heterogeneous datasets, (2) a scalable feature engineering process using MapReduce with the Binary Capuchin Search Algorithm (BCSA), and (3) interpretable classifiers enriched with SHAP for transparent predictions. Experiments on ASD datasets achieved about 87% accuracy, outperforming baselines by 7–10% and federated methods by 5%. Precision and F1 improved by 6–8%, while semantic integration enhanced interpretability and trust. By uniting semantic technologies with explainable ML, the framework ensures scalability and offers a reliable, transparent pathway toward clinically useful AI.
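The framework's third component attaches SHAP values to each prediction. The paper's own classifiers and data are not reproduced here; the following minimal sketch, with hypothetical coefficients and one hypothetical screening record, illustrates the SHAP "local accuracy" property that makes such explanations auditable: for a linear model with independent features, the exact SHAP value of feature i is w_i(x_i − mean_i), and the values sum to the gap between the model's output and its baseline prediction.

```python
# Minimal illustration of SHAP local accuracy (not the paper's implementation).
# For a linear model f(x) = w . x, the exact SHAP value of feature i is
# w_i * (x_i - mean_i); base prediction + SHAP values == prediction for x.

weights = [0.8, -0.5, 1.2]   # hypothetical model coefficients
baseline = [0.2, 0.4, 0.1]   # hypothetical feature means over training data
x = [1.0, 0.0, 0.5]          # one hypothetical screening record

def predict(features):
    return sum(w * f for w, f in zip(weights, features))

# Per-feature contribution of this record relative to the dataset average.
shap_values = [w * (xi - mi) for w, xi, mi in zip(weights, x, baseline)]

# Local accuracy check: explanations reconstruct the model output exactly.
reconstructed = predict(baseline) + sum(shap_values)
print(shap_values)
print(abs(reconstructed - predict(x)) < 1e-12)
```

In practice the `shap` library estimates these values for non-linear models as well, but the additivity check above is the property clinicians can use to audit any single prediction.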

Similar Papers
  • Research Article
  • 10.1038/s41598-025-24662-9
RETRACTED ARTICLE: Bridging the gap: explainable ai for autism diagnosis and parental support with TabPFNMix and SHAP
  • Nov 19, 2025
  • Scientific Reports
  • Shimei Jiang

Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition that affects a growing number of individuals worldwide. Despite extensive research, the underlying causes of ASD remain largely unknown, with genetic predisposition, parental history, and environmental influences identified as potential risk factors. Diagnosing ASD remains challenging due to its highly variable presentation and overlap with other neurodevelopmental disorders. Early and accurate diagnosis is crucial for timely intervention, which can significantly improve developmental outcomes and parental support. This work presents a novel artificial intelligence (AI) and explainable AI (XAI)-based framework to enhance ASD diagnosis and provide interpretable insights for medical professionals and caregivers. The proposed framework leverages advanced classification models, specifically the TabPFNMix regressor, which is optimized for structured medical datasets. Unlike traditional machine learning methods, TabPFNMix demonstrates superior performance in capturing complex ASD-related patterns. To address the black-box nature of AI models, Shapley Additive Explanations (SHAP) is integrated to provide transparent and interpretable reasoning behind the model’s decisions, ensuring better understanding for clinicians and caregivers. Extensive experiments were conducted using a publicly available benchmark dataset, with performance evaluated through standard metrics such as accuracy, precision, recall, F1-score, and AUC-ROC. Comparative analysis with baseline models, including Random Forest, XGBoost, Support Vector Machine (SVM), and Deep Neural Networks (DNNs), demonstrates that TabPFNMix achieves the highest accuracy (91.5%), surpassing XGBoost (87.3%) by 4.2 percentage points. Additionally, it attains superior recall (92.7%), precision (90.2%), F1-score (91.4%), and AUC-ROC (94.3%), ensuring both high diagnostic accuracy and robustness in real-world ASD screening. 
An ablation study highlights the significance of feature selection and preprocessing, revealing that omitting key features or preprocessing steps (such as normalization and missing data imputation) significantly degrades performance. Furthermore, SHAP-based feature importance analysis identifies social responsiveness scores, repetitive behavior scales, and parental age at birth as the most influential factors in ASD diagnosis. These insights align with medical literature, reinforcing the reliability of the model’s predictions and its applicability in clinical settings.

  • Research Article
  • Cited by: 12
  • 10.1044/leader.ftr2.16012011.12
Assessing Diverse Students With Autism Spectrum Disorders
  • Jan 1, 2011
  • The ASHA Leader
  • Tina Taylor Dyches

  • Book Chapter
  • Cited by: 3
  • 10.1016/b978-0-443-19096-4.00006-7
Chapter Twelve - Human AI: Explainable and responsible models in computer vision
  • Aug 25, 2023
  • Emotional AI and Human-AI Interactions in Social Networking
  • Kukatlapalli Pradeep Kumar + 3 more

  • Research Article
  • Cited by: 88
  • 10.1002/mp.15359
A review of explainable and interpretable AI with applications in COVID-19 imaging.
  • Dec 7, 2021
  • Medical physics
  • Jordan D Fuhrman + 5 more

The development of medical imaging artificial intelligence (AI) systems for evaluating COVID‐19 patients has demonstrated potential for improving clinical decision making and assessing patient outcomes during the recent COVID‐19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, and have augmented other clinical measurements to better inform treatment decisions. Because these systems are used in life‐or‐death decisions, clinical implementation relies on user trust in the AI output. This has caused many developers to utilize explainability techniques in an attempt to help a user understand when an AI algorithm is likely to succeed as well as which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. AI application to COVID‐19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as it pertains to the evaluation of COVID‐19 disease and how it can restore trust in AI application to this disease. This includes the identification of common tasks that are relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output as appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID‐19 to quickly understand the basics of several explainable AI techniques and assist in the selection of an approach that is both appropriate and effective for a given scenario.

  • Research Article
  • Cited by: 3
  • 10.1044/leader.ftr1.17012012.10
Come Play With Me
  • Jan 1, 2012
  • The ASHA Leader
  • Howard Goldstein + 1 more

  • Research Article
  • Cited by: 15
  • 10.1016/j.neuron.2006.04.021
Pten and the Brain: Sizing up Social Interaction
  • May 1, 2006
  • Neuron
  • Joy M Greer + 1 more

  • Research Article
  • Cited by: 6
  • 10.3390/math12223515
Innovative Approach to Detecting Autism Spectrum Disorder Using Explainable Features and Smart Web Application
  • Nov 11, 2024
  • Mathematics
  • Mohammad Abu Tareq Rony + 6 more

Autism Spectrum Disorder (ASD) is a complex developmental condition marked by challenges in social interaction, communication, and behavior, often involving restricted interests and repetitive actions. The diversity in symptoms and skill profiles across individuals creates a diagnostic landscape that requires a multifaceted approach for accurate understanding and intervention. This study employed advanced machine-learning techniques to enhance the accuracy and reliability of ASD diagnosis. We used a standard dataset comprising 1054 patient samples and 20 variables. The research methodology involved rigorous preprocessing, including selecting key variables through data mining (DM) visualization techniques including Chi-Square tests, analysis of variance, and correlation analysis, along with outlier removal to ensure robust model performance. The proposed DM and logistic regression (LR) with Shapley Additive exPlanations (DMLRS) model achieved the highest accuracy at 99%, outperforming state-of-the-art methods. eXplainable artificial intelligence was incorporated using Shapley Additive exPlanations to enhance interpretability. The model was compared with other approaches, including XGBoost, Deep Models with Residual Connections and Ensemble (DMRCE), and fast lightweight automated machine learning systems. Each method was fine-tuned, and performance was verified using k-fold cross-validation. In addition, a real-time web application was developed that integrates the DMLRS model with the Django framework for ASD diagnosis. This app represents a significant advancement in medical informatics, offering a practical, user-friendly, and innovative solution for early detection and diagnosis.
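The DMLRS pipeline described above screens variables with chi-square tests before fitting the logistic regression. That pipeline and its dataset are not reproduced here; the sketch below, using hypothetical counts, shows the underlying statistic for one binary feature versus the ASD label: chi² = n(ad − bc)² / ((a+b)(c+d)(a+c)(b+d)) for a 2×2 contingency table, with features falling below a critical value dropped before model fitting.

```python
# Sketch of chi-square feature screening on a 2x2 contingency table
# (hypothetical counts; the DMLRS pipeline itself is not reproduced here).

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts: feature present/absent vs. ASD positive/negative.
stat = chi_square_2x2(20, 10, 5, 25)

CRITICAL_95 = 3.841  # chi-square critical value, 1 d.o.f., alpha = 0.05
print(round(stat, 3), stat > CRITICAL_95)  # feature would be retained
```

In a real pipeline `scipy.stats.chi2_contingency` performs this test (with continuity correction by default); the hand computation above makes the retention rule explicit.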

  • Research Article
  • Cited by: 3
  • 10.1044/leader.ftr1.21042016.44
Early Signs
  • Apr 1, 2016
  • The ASHA Leader
  • Nancy Volkers

  • Research Article
  • 10.1177/09727531251369286
Dealing with Autism Spectrum Disorders: Journey from Traditional Methods to Artificial Intelligence.
  • Sep 8, 2025
  • Annals of neurosciences
  • Anjali Sahai

In 2024, the World Health Organisation (WHO) estimated that approximately one in 100 children globally has autism spectrum disorder (ASD). ASD is a collection of neurodevelopmental disorders that impact a person's ability to socially interact and communicate, and can typically be noticed in early childhood. While 'autism' as a term was initially used for schizophrenic patients, the psychiatrist Dr. Kanner and the paediatrician Dr. Asperger later introduced it as a syndrome in children with behavioural differences in social interaction and communication and with restrictive and repetitive interests. Today, the umbrella term 'ASDs' is used to describe a clinically heterogeneous group of neurodevelopmental disorders (NDDs). The aim is to examine the role of traditional approaches and the potential effectiveness of artificial intelligence (AI) methods in dealing with ASDs, improving the accuracy of diagnosis and treatment. The study adopts a narrative review approach to understand the application of AI in ASD. For this purpose, around a hundred research articles published between 2010 and 2024 were selected, and inclusion and exclusion criteria were identified. The review is organised around medical treatment, occupational therapy, vocational therapy, psychology, family therapy and rehabilitation engineering. The results show the undisputed role of AI and its ability to identify early indicators of autism, in accordance with UN Sustainable Development Goal 3 (Good Health and Well-being) and Goal 16 (Peace, Justice and Strong Institutions). Further, healthcare sectors applying AI analyses to data sources such as genetics, neuroimaging, behavioural patterns and electronic medical records are able to detect ASD early for individualised evaluation.
Machine learning (ML) algorithms demonstrate high accuracy in differentiating ASD from neurotypical development and other developmental disorders, underscoring the significance of timely interventions. AI-driven therapeutic interventions expand social interaction and communication skills in people with ASD in the form of virtual reality-based training, augmentative communication systems and robot-assisted therapies. Thus, the future of AI in ASD holds promise for improving diagnostic accuracy, implementing telehealth platforms and customising treatment plans, despite obstacles such as data privacy and interpretability.

  • Supplementary Content
  • 10.3390/healthcare13243208
Unveiling the Algorithm: The Role of Explainable Artificial Intelligence in Modern Surgery
  • Dec 8, 2025
  • Healthcare
  • Sara Lopes + 4 more

Artificial Intelligence (AI) is rapidly transforming surgical care by enabling more accurate diagnosis and risk prediction, personalized decision-making, real-time intraoperative support, and postoperative management. Ongoing trends such as multi-task learning, real-time integration, and clinician-centered design suggest AI is maturing into a safe, pragmatic asset in surgical care. Yet, significant challenges, such as the complexity and opacity of many AI models (particularly deep learning), transparency, bias, data sharing, and equitable deployment, must be surpassed to achieve clinical trust, ethical use, and regulatory approval of AI algorithms in healthcare. Explainable Artificial Intelligence (XAI) is an emerging field that plays an important role in bridging the gap between algorithmic power and clinical use as surgery becomes increasingly data-driven. The authors reviewed current applications of XAI in the context of surgery—preoperative risk assessment, surgical planning, intraoperative guidance, and postoperative monitoring—and highlighted the absence of these mechanisms in Generative AI (e.g., ChatGPT). XAI will allow surgeons to interpret, validate, and trust AI tools. XAI applied in surgery is not a luxury: it must be a prerequisite for responsible innovation. Model bias, overfitting, and user interface design are key challenges that need to be overcome and will be explored in this review to achieve the integration of XAI into the surgical field. Unveiling the algorithm is the first step toward a safe, accountable, transparent, and human-centered surgical AI.

  • Research Article
  • 10.1177/20552076251390281
An interpretable multimodal deep learning framework for Alzheimer's disease diagnosis
  • May 1, 2025
  • Digital Health
  • Abdullah Alsaleh

Background: Alzheimer's disease (AD) presents a significant and escalating public health concern, with early-stage neurodegeneration often going undetected using current diagnostic procedures. Medical imaging modalities, particularly structural magnetic resonance imaging (MRI) and functional positron emission tomography (PET), provide complementary insights into the anatomical and metabolic changes associated with AD. Despite their potential, the integration of these imaging techniques into a unified, explainable artificial intelligence (AI) framework remains limited. Objectives: This study aims to develop and evaluate NeuroFusion-ADNet, a novel AI model that effectively combines structural and functional imaging data to improve diagnostic accuracy and clinical interpretability in AD detection. Methods: NeuroFusion-ADNet is a dual-path deep learning model that jointly processes co-registered MRI and PET slices for simultaneous region-of-interest segmentation and diagnostic classification. The model features modality-specific encoders for structural and functional feature extraction, a bi-directional cross-attention fusion layer and a segmentation-informed classification module. The framework was trained and evaluated using the Alzheimer's Disease Neuroimaging Initiative dataset, comprising 381 subjects across normal control, mild cognitive impairment and AD categories. Performance was benchmarked against standard architectures, including ResNet152, U-Net++, and multimodal convolutional neural networks (CNNs). Because combining CNNs with attention mechanisms has recently shown highly effective results in medical image analysis, the model also integrates explainability features, including attention heatmaps and Local Interpretable Model-Agnostic Explanations. Results: NeuroFusion-ADNet achieved a classification accuracy of 99.48% and a Dice coefficient of 0.985, significantly outperforming existing baselines.
Attention-based visualizations confirmed that the model consistently focuses on clinically relevant brain regions such as the hippocampus, entorhinal cortex and basal ganglia. Extensive ablation studies validated the contributions of each architectural component. Conclusion: This work introduces a clinically promising multimodal AI framework that enhances diagnostic accuracy while maintaining transparency through explainable techniques. NeuroFusion-ADNet sets a foundation for the development of efficient, interpretable and deployable tools in the early diagnosis of AD.
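The cross-attention fusion this abstract describes can be sketched in miniature. The actual NeuroFusion-ADNet layers are not reproduced; the toy example below, with hypothetical 2-dimensional features, shows one direction of such fusion: an MRI-derived query attends over PET-derived key/value vectors via scaled dot-product attention, attn_j = softmax(q·k_j/√d), fused = Σ_j attn_j·v_j.

```python
# Toy sketch of one direction of cross-attention fusion (hypothetical
# features; not the actual NeuroFusion-ADNet implementation).
import math

def softmax(scores):
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attend(query, keys, values):
    d = len(query)
    # Scaled dot-product similarity between the query and each key.
    scores = [sum(qi * ki for qi, ki in zip(query, k)) / math.sqrt(d)
              for k in keys]
    weights = softmax(scores)
    # Weighted sum of value vectors from the other modality.
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return weights, fused

mri_query = [1.0, 0.0]               # hypothetical MRI feature vector
pet_keys = [[1.0, 0.0], [0.0, 1.0]]  # hypothetical PET key vectors
pet_values = [[2.0, 0.0], [0.0, 2.0]]

weights, fused = cross_attend(mri_query, pet_keys, pet_values)
print(weights)  # sums to 1; higher weight on the matching PET region
print(fused)
```

A bi-directional layer would also run the symmetric pass (PET queries attending over MRI keys/values) and combine both fused representations; the attention weights themselves are what the paper's heatmap visualizations expose.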

  • Research Article
  • Cited by: 1
  • 10.3389/fdgth.2025.1692517
Ethical and practical challenges of generative AI in healthcare and proposed solutions: a survey
  • Nov 17, 2025
  • Frontiers in Digital Health
  • Tina Tung + 2 more

Background: Generative artificial intelligence (AI) is rapidly transforming healthcare, but its adoption introduces significant ethical and practical challenges. Algorithmic bias, ambiguous liability, lack of transparency, and data privacy risks can undermine patient trust and create health disparities, making their resolution critical for responsible AI integration. Objectives: This systematic review analyzes the generative AI landscape in healthcare. Our objectives were to: (1) identify AI applications and their associated ethical and practical challenges; (2) evaluate current data-centric, model-centric, and regulatory solutions; and (3) propose a framework for responsible AI deployment. Methods: Following the PRISMA 2020 statement, we conducted a systematic review of PubMed and Google Scholar for articles published between January 2020 and May 2025. A multi-stage screening process yielded 54 articles, which were analyzed using a thematic narrative synthesis. Results: Our review confirmed AI’s growing integration into medical training, research, and clinical practice. Key challenges identified include systemic bias from non-representative data, unresolved legal liability, the “black box” nature of complex models, and significant data privacy risks. Proposed solutions are multifaceted, spanning technical (e.g., explainable AI), procedural (e.g., stakeholder oversight), and regulatory strategies. Discussion: Current solutions are fragmented and face significant implementation barriers. Technical fixes are insufficient without robust governance, clear legal guidelines, and comprehensive professional education. Gaps in global regulatory harmonization and frameworks ill-suited for adaptive AI persist.
A multi-layered, socio-technical approach is essential to build trust and ensure the safe, equitable, and ethical deployment of generative AI in healthcare. Conclusions: The review confirmed that generative AI has a growing integration into medical training, research, and clinical practice. Key challenges identified include systemic bias stemming from non-representative data, unresolved legal liability, the “black box” nature of complex models, and significant data privacy risks. These challenges can undermine patient trust and create health disparities. Proposed solutions are multifaceted, spanning technical (such as explainable AI), procedural (like stakeholder oversight), and regulatory strategies.

  • Research Article
  • 10.33022/ijcs.v14i5.4969
Evaluating the Impact of Artificial Intelligence Enhanced Augmented Reality Tools on Social Interaction in Learners with Autism Spectrum Disorder
  • Oct 23, 2025
  • The Indonesian Journal of Computer Science
  • Femi Elegbeleye + 3 more

Autism Spectrum Disorder (ASD) is a cognitive developmental condition characterized by persistent deficits in social communication and interaction, alongside restricted and repetitive patterns of behavior. The global prevalence of ASD is estimated at approximately 1% in the general population, with higher rates observed in specific demographic groups. Individuals with ASD often experience challenges in interpreting social cues, initiating interactions, and participating in group settings, which can impede their academic and social development. This study examines how Augmented Reality (AR) and Artificial Intelligence (AI)-based interventions can complement or improve the social communication skills and behavioral patterns of individuals with ASD. A systematic literature review (SLR) was conducted, focusing on peer-reviewed studies published between 2019 and 2024, to assess the efficacy and practicality of these technologies in educational environments. The analysis covers engagement of visual boards, smartphones, tablets, and AR glasses, which are increasingly integrated into pedagogical strategies to enhance the learning experiences of students with ASD. The results demonstrate that AI-enhanced AR-based interventions significantly outperformed traditional teaching methods, with notable improvements in social interaction (70% vs. 50%), emotional recognition (60% vs. 40%), engagement (80% vs. 55%), communication skills (75% vs. 45%), and behavioral outcomes (65% vs. 50%). These technologies appear to support the development of social skills by providing interactive, personalized, and visually enriched learning environments. The outcomes of this research highlight the potential of AI-enhanced AR to complement traditional teaching methods, offering valuable insights for educators, therapists, and policymakers seeking practical approaches to support learners with ASD. 
Further empirical research is recommended to validate these findings across diverse educational settings.

  • Research Article
  • Cited by: 2
  • 10.1176/appi.pn.2023.04.4.34
Special Report: Autism Spectrum Disorder and Inflexible Thinking—Affecting Patients Across the Lifespan
  • Apr 1, 2023
  • Psychiatric News
  • Eric Hollander + 1 more

  • Research Article
  • Cited by: 6
  • 10.1016/j.jneumeth.2024.110315
Early detection of autism spectrum disorder using explainable AI and optimized teaching strategies
  • Nov 10, 2024
  • Journal of Neuroscience Methods
  • Sarah A Alzakari + 6 more
