- Research Article
- 10.3389/frai.2025.1661637
- Jan 6, 2026
- Frontiers in Artificial Intelligence
- Nina Moorman + 3 more
Introduction Patients with severe COVID-19 may require mechanical ventilation (MV) or extracorporeal membrane oxygenation (ECMO). Predicting who will require these interventions, and for how long, is challenging due to the diverse responses among patients and the dynamic nature of the disease. As such, there is a need for better prediction of the duration and outcomes of MV use in patients, to improve patient care and aid with MV and ECMO allocation. Here we develop and examine the performance of machine-learning (ML) models to predict MV duration, ECMO use, and mortality for patients with COVID-19. Methods In this retrospective prognostic study, hierarchical machine-learning models were developed to predict MV duration and outcomes from demographic data and time-series data consisting of vital signs and laboratory results. We trained our models on 10,378 patients with positive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) test results from Emory’s COVID CRADLE Dataset who sought treatment at Emory University Hospital between February 28, 2020, and January 24, 2022. Analysis was conducted between January 10, 2022, and April 5, 2024. The main outcomes and measures were the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPRC), and the F-score for MV duration, need for ECMO, and mortality prediction. Results Data from 10,378 patients with COVID-19 (median [IQR] age, 60 [48–72] years; 5,281 [50.89%] women) were included. Overall MV class distributions for 0 days, 1–4 days, 5–9 days, 10–14 days, 15–19 days, 20–24 days, 25–29 days, and ≥30 days of MV were 8,141 (78.44%), 812 (7.82%), 325 (3.13%), 241 (2.32%), 153 (1.47%), 97 (0.93%), 87 (0.84%), and 522 (5.03%), respectively. Overall ECMO use and mortality rates were 15 (0.14%) and 1,114 (10.73%), respectively. On MV duration, ECMO use, and mortality outcomes, the highest-performing models reached weighted average AUROC scores of 0.873, 0.902, and 0.774, and weighted average AUPRC scores of 0.790, 0.999, and 0.893, respectively.
Conclusions and relevance Hierarchical ML models trained on vital signs, laboratory results, and demographic data show promise for the prediction of MV duration, ECMO use, and mortality in COVID-19 patients.
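The weighted-average AUROC and AUPRC figures quoted above aggregate per-class one-vs-rest scores by class support. A minimal sketch of both pieces, assuming a rank-based AUROC and a support-weighted average (function names are our own, not the paper's):

```python
def auroc(labels, scores):
    """One-vs-rest AUROC via the rank (Mann-Whitney) formulation:
    the probability that a random positive outscores a random negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def support_weighted(per_class_scores, supports):
    """Average per-class metrics weighted by class prevalence, matching
    the 'weighted average' aggregation named in the abstract."""
    return sum(s * n for s, n in zip(per_class_scores, supports)) / sum(supports)
```

With the heavy class imbalance reported (78% of patients never ventilated), this weighting lets the majority class dominate the headline number, which is why per-class metrics are worth inspecting alongside it.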
- Research Article
- 10.3389/frai.2025.1703949
- Jan 6, 2026
- Frontiers in Artificial Intelligence
- Naisarg Patel + 5 more
Introduction The launch of DeepSeek, a Chinese open-source generative AI model, generated substantial discussion regarding its capabilities and implications. The r/deepseek subreddit emerged as a key forum for real-time public evaluation. Analyzing this discourse is essential for understanding the sociotechnical perceptions shaping the integration of emerging AI systems. Methods We analyzed 46,649 posts and comments from r/deepseek (January–May 2025) using a computational framework combining VADER sentiment analysis, Hartmann emotion classification, BERTopic for thematic modeling, hyperlink extraction, and directed network analysis. Data preprocessing included cleaning, normalization, and lemmatization. We also examined correlations between sentiment/emotion scores and dominant topics. Results Sentiment was predominantly positive (posts: 47.23%; comments: 44.26%), with neutral sentiment comprising ~30% of content. The most frequent emotion was neutrality, followed by surprise and fear, indicating ambivalent user reactions. Prominent topics included open-source AI models, DeepSeek usage, device compatibility, comparisons with ChatGPT, and censorship concerns. Hyperlink analysis indicated strong engagement with GitHub, Hugging Face, and DeepSeek’s own services. Network analysis revealed a fragmented but active community, depicting Open-Source AI Models as the most cohesive cluster. Discussion Community discourse framed DeepSeek as both a technical tool and a geopolitical issue. Enthusiasm centered on its performance, accessibility, and open-source nature, while concerns were voiced about censorship, data privacy, and potential ideological influence. The integrated analysis shows that collective perception emerged through decentralized, dialogic engagement, reflecting broader sociotechnical tensions related to openness, trust, and legitimacy in global AI development.
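The VADER stage of the pipeline above is a lexicon-and-rules scorer rather than a learned model. A tiny illustrative sketch of that style of scoring (the lexicon entries here are placeholders; the real VADER lexicon has thousands of human-rated tokens plus rules for intensifiers, punctuation, and capitalization that this omits):

```python
import math

# Hypothetical mini-lexicon of valence ratings (illustrative values only).
LEXICON = {"great": 3.1, "good": 1.9, "bad": -2.5, "terrible": -3.4}
NEGATORS = {"not", "never", "no"}

def polarity(text):
    """Sum lexicon valences, flipping the next sentiment token after a
    negator, then squash the raw sum into [-1, 1] in VADER's style."""
    score, flip = 0.0, 1.0
    for tok in text.lower().split():
        if tok in NEGATORS:
            flip = -1.0                    # negate the next sentiment token
        elif tok in LEXICON:
            score += flip * LEXICON[tok]
            flip = 1.0
    return score / math.sqrt(score * score + 15)
```

Because the compound score is continuous in [-1, 1], the positive/neutral/negative percentages reported in the Results come from thresholding it.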
- Research Article
- 10.3389/frai.2025.1724493
- Jan 6, 2026
- Frontiers in Artificial Intelligence
- Jie Zhang + 1 more
Introduction To address the challenges of data heterogeneity, strategic diversity, and process opacity in interpreting multi-agent decision-making within complex competitive environments, we have developed TRACE, an end-to-end analytical framework for StarCraft II gameplay. Methods This framework standardizes raw replay data into aligned state trajectories, extracts “typical strategic progressions” using a Conditional Recurrent Variational Autoencoder (C-RVAE), and quantifies the deviation of individual games from these archetypes via counterfactual alignment. Its core innovation is the introduction of a dimensionless deviation metric, |Δ|, which achieves process-level interpretability. This metric reveals “which elements are important” by ranking time-averaged feature contributions across aggregated categories (Economy, Military, Technology) and shows “when deviations occur” through temporal heatmaps, forging a verifiable evidence chain. Results Quantitative evaluation on professional tournament datasets demonstrates the framework’s robustness, revealing that strategic deviations often crystallize in the early game (averaging 8.4% of match duration) and are frequently driven by critical technology timing gaps. The counterfactual generation module effectively restores strategic alignment, achieving an average similarity improvement of over 90% by correcting identified divergences. Furthermore, expert human evaluation confirms the practical utility of the system, awarding high scores for Factual Fidelity (4.6/5.0) and Causal Coherence (4.3/5.0) to the automatically generated narratives. Discussion By providing open-access code and reproducible datasets, TRACE lowers the barrier to large-scale replay analysis, offering an operational quantitative basis for macro-strategy understanding, coaching reviews, and AI model evaluation.
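A dimensionless per-feature deviation of the kind |Δ| describes can be sketched as follows; the normalization below (scaling by the archetype's per-feature spread) is our assumption, not necessarily the paper's exact formulation:

```python
import numpy as np

def deviation_profile(game, archetype, eps=1e-8):
    """Per-feature, per-timestep dimensionless deviation.

    game, archetype: arrays of shape (T, F) -- time-aligned state
    trajectories. Each feature is scaled by the archetype's spread so
    that deviations are comparable across Economy/Military/Technology
    channels (hypothetical normalization)."""
    scale = archetype.std(axis=0) + eps
    return np.abs(game - archetype) / scale      # shape (T, F)

def rank_features(delta, names):
    """Rank features by time-averaged contribution, as in the abstract's
    'which elements are important' view; the temporal axis of delta is
    what a heatmap would show for 'when deviations occur'."""
    avg = delta.mean(axis=0)
    order = np.argsort(avg)[::-1]
    return [(names[i], float(avg[i])) for i in order]
```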
- Research Article
- 10.3389/frai.2025.1659861
- Dec 19, 2025
- Frontiers in Artificial Intelligence
- Sherif Elmitwalli + 3 more
Background The proliferation of tobacco-related misinformation poses significant public health risks, requiring scalable solutions for credibility assessment. Traditional manual fact-checking approaches are resource-intensive and cannot match the pace of misinformation spread. Objective To develop and validate a proof-of-concept multi-agent AI pipeline for automated credibility assessment of tobacco misinformation claims, evaluating its performance against expert human reviewers. Methods We constructed a three-agent pipeline using OpenAI GPT-4.1 and the CrewAI framework. The Serper API provided real-time evidence retrieval. The Content Analyzer classifies claims into four types: health impact, scientific assertion, policy, or statistical. The Scientific Fact Verifier queries authoritative sources (WHO, CDC, PubMed Central, Cochrane). The Health Evidence Assessor applies weighted scoring across five dimensions to assign 0–100 credibility scores on a five-level scale. Results The framework achieved a mean absolute error (MAE) of 6.25 points against expert scores, a weighted Cohen’s κ of 0.68 (95% CI: 0.52–0.84) indicating substantial agreement, 70% exact category agreement, and 95% adjacent-level agreement, and processed each claim in under 7 s, over 1,000× faster than manual review. Limitations We validated our approach using 20 diverse tobacco claims through intensive expert review (2–4 h per claim). The system exhibited a conservative bias (+3.25 points, p = 0.03) and did not classify any claims as “Highly Unlikely” despite expert assignment of two claims to this category. This proof-of-concept demonstrates technical feasibility and substantial inter-rater agreement while identifying areas for calibration in future large-scale implementations. Conclusion Our proof-of-concept agentic AI pipeline demonstrates substantial agreement with expert assessments of tobacco-related claims while providing dramatic speed improvements.
By combining zero-shot LLM reasoning, retrieval-grounded evidence verification, and a transparent five-level scoring schema, the system offers a practical tool for real-time misinformation monitoring in public health. This proof-of-concept establishes technical feasibility for automated tobacco misinformation assessment, with validation results supporting further development and larger-scale testing before operational deployment.
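The weighted Cohen's κ reported above can be computed directly from paired ordinal ratings. A self-contained sketch of the standard weighted formulation (our own code, not the authors'):

```python
def weighted_kappa(a, b, n_levels, weight="linear"):
    """Weighted Cohen's kappa for two raters on an ordinal 0..n_levels-1
    scale: kappa = 1 - sum(w*O) / sum(w*E), where O is the observed joint
    distribution, E the chance-expected product of marginals, and w a
    linear or quadratic distance penalty."""
    n = len(a)
    obs = [[0.0] * n_levels for _ in range(n_levels)]
    for x, y in zip(a, b):
        obs[x][y] += 1.0 / n
    pa = [sum(1 for x in a if x == i) / n for i in range(n_levels)]
    pb = [sum(1 for y in b if y == i) / n for i in range(n_levels)]
    num = den = 0.0
    for i in range(n_levels):
        for j in range(n_levels):
            w = abs(i - j) if weight == "linear" else (i - j) ** 2
            num += w * obs[i][j]
            den += w * pa[i] * pb[j]
    return 1.0 - num / den
```

On the abstract's five-level scale, the weighting is what makes adjacent-level disagreements (95% of cases here) cost far less than distant ones.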
- Research Article
- 10.3389/frai.2025.1689727
- Dec 19, 2025
- Frontiers in Artificial Intelligence
- Mohammadreza Nehzati
Introduction Conventional artificial intelligence (AI) systems are limited by static architectures that require periodic retraining and fail to adapt efficiently to continuously changing data environments. To address this limitation, this research introduces a novel biologically inspired computing paradigm that supports perpetual learning through continuous data assimilation and autonomous structural evolution. The proposed system aims to emulate biological cognition, enabling lifelong learning, self-repair, and adaptive evolution without human intervention. Methods The system is built upon dynamic cognitive substrates that continuously absorb and map real-time information streams. These substrates eliminate the traditional distinction between training and inference phases, supporting uninterrupted learning. Quantum-inspired uncertainty management ensures computational robustness, while biomimetic self-healing protocols maintain structural integrity during adaptive changes. Additionally, micro-optimization via fractal propagation enhances mathematical specialization across hierarchical computational levels. Recursive learning mechanisms allow the architecture to refine its functionality based on its own outputs. Results Experimental validation demonstrates that the proposed architecture sustains effective learning across diverse, heterogeneous data domains. The system autonomously restructures itself, maintaining stability while improving performance in dynamic environments. Specialized cognitive processing units, analogous to biological organs, perform distinct functions and collectively enhance adaptive intelligence. Notably, the system prioritizes and retains valuable information through evolution, reflecting biological memory consolidation patterns. Discussion The findings reveal that continuous, self-modifying AI architectures can outperform traditional models in non-stationary conditions.
By integrating quantum uncertainty control, biomimetic repair mechanisms, and fractal-based optimization, the system achieves resilient, autonomous learning over time. This approach has far-reaching implications for developing lifelong-learning machines capable of dynamic adaptation, self-maintenance, and evolution, paving the way toward fully autonomous, continuously learning artificial organisms.
- Research Article
- 10.3389/frai.2025.1706566
- Dec 19, 2025
- Frontiers in Artificial Intelligence
- Johan Pena-Campos + 5 more
Black-box models, particularly Support Vector Machines (SVM), are widely employed for identifying dynamic systems due to their high predictive accuracy; however, their inherent lack of transparency hinders the understanding of how individual input variables contribute to the system output. Consequently, retrieving interpretability from these complex models has become a critical challenge in the control and identification community. This paper proposes a post-hoc functional decomposition algorithm based on Non-linear Oblique Subspace Projections (NObSP). The method decomposes the output of an already identified SVM regression model into a sum of partial (non)linear dynamic contributions associated with each input regressor. By operating in the non-linear feature space, NObSP utilizes oblique projections to mitigate cross-contributions from correlated regressors. Furthermore, an efficient out-of-sample extension is introduced to improve scalability. Numerical simulations performed on benchmark Wiener and Hammerstein structures demonstrate that the proposed method effectively retrieves the underlying partial nonlinear dynamics of each sub-system. Additionally, the computational analysis confirms that the proposed extension reduces the arithmetic complexity from 𝒪(N³) to 𝒪(Nd²), where d is the number of support vectors. These findings indicate that NObSP is a robust geometric framework for interpreting non-linear dynamic models, offering a scalable solution that successfully decouples blended dynamics without sacrificing the predictive power of the black-box model.
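The projection geometry behind this decomposition can be illustrated in the linear case: with a full-rank joint design matrix, the block-wise slices of the least-squares solution yield exactly the oblique projection of the output onto each regressor's subspace along the others. This is a toy linear sketch of the idea only; NObSP applies the analogous geometry in the SVM's nonlinear feature space, and the function below is our own, not the paper's implementation:

```python
import numpy as np

def oblique_contributions(blocks, y):
    """Decompose y into per-regressor contributions.

    blocks: list of (N, d_i) design matrices, one per regressor subspace;
    y: (N,) output assumed to lie (nearly) in their joint span. Solving the
    joint least-squares problem and splitting the coefficients recovers the
    oblique projection onto each block ALONG the remaining blocks, so
    correlated regressors do not cross-contaminate each other's share."""
    X = np.hstack(blocks)                         # joint design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    parts, k = [], 0
    for B in blocks:
        d = B.shape[1]
        parts.append(B @ coef[k:k + d])           # oblique projection of y
        k += d
    return parts                                  # sum(parts) ≈ fitted y
```

An orthogonal projection onto each block separately would instead double-count whatever the blocks share, which is precisely the "blended dynamics" problem the oblique construction avoids.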
- Front Matter
- 10.3389/frai.2025.1760127
- Dec 19, 2025
- Frontiers in Artificial Intelligence
- Antonio Sarasa-Cabezuelo + 4 more
- Research Article
- 10.3389/frai.2025.1675132
- Dec 17, 2025
- Frontiers in Artificial Intelligence
- Iván Ortiz-Garcés + 2 more
The rapid adoption of Internet of Things (IoT) devices in cyber-physical systems introduces significant security challenges, particularly in distributed and heterogeneous environments where operational resilience and real-time threat response are critical. Previous efforts have explored lightweight encryption and modular authentication; however, few solutions provide a unified framework that integrates real-time anomaly detection, automated mitigation, and performance evaluation under hybrid experimental conditions. This work presents an autonomous multi-layered security architecture for IoT networks, implemented through microservices-based middleware with native support for detection and adaptive response mechanisms. The architecture integrates lightweight anomaly inference models, based on entropy metrics and anomaly scores, with a rule-based engine that executes dynamic containment actions such as node isolation, channel reconfiguration, and key rotation. The system runs on edge hardware (Raspberry Pi, sensors, actuators) and is validated in a hybrid testbed with NS-3 simulations. Experimental results show an F1-Score of 0.931 in physical deployments and 0.912 in simulated scenarios, with anomaly detection latencies below 130 ms and containment actions triggered within 300 ms. Under high-load conditions, CPU usage remains under 60% and memory consumption below 300 MB. Compared to representative middleware platforms such as BlendSM-DDM and Claimsware, the proposed system uniquely integrates detection, response, and auditability, achieving high scalability and resilience for IoT deployments in real-world hybrid environments.
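The entropy-metric-plus-rules pattern described above can be sketched minimally: score a traffic window by its Shannon entropy shift against a learned baseline, then map the score to a containment action. The threshold and action names below are illustrative, not the paper's calibrated values:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of an observed symbol stream, e.g. the
    destination ports or packet sizes seen in a sliding window."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def anomaly_action(window, baseline_entropy, tol=1.0):
    """Toy rule-based trigger: a large entropy shift versus the baseline
    marks the window anomalous and fires a containment action
    (hypothetical rule; real deployments tune tol per channel)."""
    score = abs(shannon_entropy(window) - baseline_entropy)
    if score > tol:
        return "isolate_node"      # e.g. quarantine plus key rotation
    return "pass"
```

A scan that suddenly touches many distinct ports raises entropy sharply, while a flooding attack on one port collapses it; both directions of shift are anomalous, hence the absolute value.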
- Research Article
- 10.3389/frai.2025.1731062
- Dec 17, 2025
- Frontiers in Artificial Intelligence
- Fernando García-Gutiérrez + 2 more
Introduction Alzheimer's disease (AD) is characterized by significant variability in clinical progression; however, few studies have focused on developing models to predict cognitive decline. Anticipating these trajectories is essential for patient management, care planning, and developing new treatments. This study explores the potential of artificial intelligence (AI) techniques to model neurocognitive trajectories from multimodal neuroimaging data and further investigates different data representation frameworks. Methods Using information from 653 participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI), we developed models to predict future clinical diagnoses and cognitive decline, both quantitatively (rate of decline) and qualitatively (presence or absence of decline). Input features included structural T1-weighted magnetic resonance imaging (MRI), [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET), [18F]-florbetapir PET (AV45-PET), neuropsychological assessments, and demographic variables. Several information representation strategies were explored, including tabular data models, convolutional neural networks (CNNs), and graph neural networks (GNNs). Furthermore, to maximize the use of all available information, we proposed a modeling framework that performed modality-specific pre-training to learn feature embeddings, which were then integrated through a late-fusion layer to produce a unified representation for downstream prediction. Results The modeling strategies demonstrated good predictive performance for future clinical diagnoses, consistent with previous studies (F1 = 0.779). Quantitative models explained approximately 29.4%–36.0% of the variance in cognitive decline. In the qualitative analysis, the models achieved AUC values above 0.83 when predicting cognitive deterioration in the memory, language, and executive function domains.
Architecturally, CNN- and GNN-based models yielded the best performance, and the proposed pre-training strategy consistently improved predictive accuracy. Conclusions This study demonstrates that AI techniques can capture patterns of cognitive decline by exploiting multimodal neuroimaging data. These findings contribute to the development of more precise phenotyping approaches for neurodegenerative patterns in AD.
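The late-fusion step described in the Methods reduces, at its simplest, to concatenating frozen modality-specific embeddings and applying a trainable head. A minimal sketch under that assumption (weight shapes and the single-logit head are placeholders, not the paper's architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def late_fusion_predict(embeddings, w, b):
    """Concatenate pre-trained modality embeddings (e.g. MRI, FDG-PET,
    AV45-PET, neuropsychology, demographics) into one unified vector,
    then apply a linear head; here reduced to a single
    decline-vs-no-decline probability for illustration."""
    z = np.concatenate(embeddings)          # unified representation
    return sigmoid(w @ z + b)               # P(cognitive decline)
```

Pre-training each encoder separately lets every modality contribute even for participants missing some scans, with only the small fusion head needing the jointly observed subset.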
- Research Article
- 10.3389/frai.2025.1696859
- Dec 17, 2025
- Frontiers in Artificial Intelligence
- Chirag Jitendra Chandnani + 3 more
Rapid urbanization contributes to unprecedented climate change. The Pune area in India has witnessed recent flash floods and landslides due to unplanned rapid urbanization. It therefore becomes vital to manage and analyse man-made impact on the environment through effective land use land cover (LULC) classification. Accurate LULC classification allows for better planning and effective allocation of resources in urban development. Remote sensing images provide surface reflectance data that are used for accurate mapping and monitoring of land cover. Convolutional neural networks (CNNs) trained with ReLU activations are conventionally used to classify different land types. However, every neuron has a single hyperplane decision boundary, which restricts the model's capability to generalize. Oscillatory activation functions, with their periodic nature, have demonstrated that a single neuron can have multiple hyperplanes in its decision boundary, which helps improve generalization and accuracy. This study proposes a novel framework with convoluted oscillatory neural networks (CONN) that synergistically combines the periodic, non-monotonic nature of oscillatory activation functions with the deep convolutional architecture of CNNs to accurately map LULC. Experiments on LANDSAT-8 surface reflectance images for the Pune area indicate that CONN with the Decaying Sine Unit achieved an overall train accuracy of 99.999% and a test accuracy of 95.979%, outperforming conventional CNN models in precision, recall, and User's Accuracy. A thorough ablation study was conducted with various subsets of the feature set to test the performance of the selected models in the absence of data.
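For context, two oscillatory activations from this literature can be written directly. The Growing Cosine Unit (z·cos z) is standard; the Decaying Sine Unit expression below is the form commonly cited in work on oscillatory activations and is assumed here rather than taken from this paper:

```python
import numpy as np

def gcu(z):
    """Growing Cosine Unit: z * cos(z). Its periodic sign changes let a
    single neuron realise multiple hyperplane segments in its decision
    boundary, unlike monotonic ReLU."""
    return z * np.cos(z)

def dsu(z):
    """Decaying Sine Unit, assumed form:
    (pi/2) * (sinc(z - pi) - sinc(z + pi)),
    with the unnormalised sinc(x) = sin(x)/x. Oscillates near the origin
    and decays toward zero for large |z|."""
    def sinc(x):
        x = np.asarray(x, dtype=float)
        safe = np.where(np.abs(x) < 1e-12, 1.0, x)   # avoid 0/0 at x = 0
        return np.where(np.abs(x) < 1e-12, 1.0, np.sin(safe) / safe)
    return (np.pi / 2) * (sinc(z - np.pi) - sinc(z + np.pi))
```

In a Keras or PyTorch model, either function can be dropped in as a custom activation in place of ReLU without changing the surrounding convolutional layers.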