- Research Article
- 10.3389/fninf.2025.1649440
- Dec 17, 2025
- Frontiers in Neuroinformatics
- Marco Ganzetti + 6 more
Background: Spinal cord atrophy is a key biomarker for tracking disease progression in neurological disorders, including multiple sclerosis, amyotrophic lateral sclerosis, and spinal cord injury. Recent MRI advancements have improved atrophy detection, particularly in the cervical region, facilitating longitudinal studies. However, validating atrophy quantification algorithms remains challenging due to limited ground truth data.
Objective: This study introduces SynSpine, a workflow for generating synthetic spinal cord MRI data (i.e., digital phantoms) with controlled levels of artificial atrophy. These phantoms support the development and preliminary validation of spinal cord imaging pipelines designed to measure degeneration over time.
Methods: The workflow consists of two phases: (1) generating synthetic MR images by isolating, extracting, and scaling the spinal cord to simulate atrophy on the PAM50 template; (2) performing non-rigid registration to align the synthetic images with the subject's native space, ensuring accurate anatomical correspondence. A proof-of-concept application using the Active Surface and Reg methods implemented in Jim demonstrated the workflow's effectiveness in detecting atrophy across various levels of simulated atrophy and noise.
Results: SynSpine successfully generates synthetic spinal cord images with varying atrophy levels. Non-rigid registration did not significantly affect atrophy measurements. Atrophy estimation errors, estimated using the Active Surface and Reg methods, varied with both simulated atrophy magnitude and noise level, exhibiting region-dependent differences. Increased noise led to higher measurement errors.
Conclusion: This work presents a novel and modular framework for simulating spinal cord atrophy data using digital phantoms, offering a controlled setting for testing spinal cord analysis pipelines.
As the simulated atrophy may over-simplify in vivo conditions, future research will focus on enhancing the realism of the synthetic dataset by simulating additional pathologies, thus improving its application for evaluating spinal cord atrophy in clinical and research contexts.
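The core of phase (1), shrinking the cord to hit a target cross-sectional area (CSA) reduction, can be sketched on a toy axial slice. This is a minimal illustration, not the SynSpine implementation: the circular "cord" mask, the 10% reduction, and the square-root scale factor are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import zoom

def simulate_atrophy(cord_mask, area_reduction):
    """Shrink a 2-D axial cord mask in-plane so its cross-sectional area
    drops by roughly `area_reduction` (e.g. 0.10 for 10%). Scaling both
    axes by sqrt(1 - r) scales area by (1 - r)."""
    s = np.sqrt(1.0 - area_reduction)
    shrunk = zoom(cord_mask.astype(float), s, order=1) >= 0.5
    # Re-pad to the original grid, centred, so images stay comparable.
    out = np.zeros_like(cord_mask, dtype=bool)
    dy = (cord_mask.shape[0] - shrunk.shape[0]) // 2
    dx = (cord_mask.shape[1] - shrunk.shape[1]) // 2
    out[dy:dy + shrunk.shape[0], dx:dx + shrunk.shape[1]] = shrunk
    return out

# Hypothetical circular "cord" on a 64x64 axial slice.
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2
atrophied = simulate_atrophy(mask, 0.10)
measured = 1.0 - atrophied.sum() / mask.sum()  # close to the requested 10%
```

In the actual workflow the scaled cord would then be composited back into the template image and registered to the subject's native space.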
- Research Article
- 10.3389/fninf.2025.1679664
- Dec 5, 2025
- Frontiers in Neuroinformatics
- Karol Chlasta + 2 more
Dementia poses a major challenge to individuals and public health systems. Detecting cognitive decline through spontaneous speech offers a promising, non-invasive avenue for diagnosis of mild cognitive impairment (MCI) and dementia, enabling timely intervention and improved outcomes. This study describes our submission to the PROCESS Signal Processing Grand Challenge (ICASSP 2025), which tasked participants with predicting cognitive decline from speech samples. Our method combines eGeMAPS features from openSMILE, HuBERT (a self-supervised speech representation model), and GPT-4o, OpenAI's state-of-the-art large language model. These are integrated with custom LSTM and ResMLP neural networks, and supported by Scikit-learn regressors/classifiers for both cognitive score regression and dementia classification. Our regression model based on LightGBM achieved an RMSE of 2.7775, placing us 10th out of 80 teams globally and surpassing the RoBERTa baseline by 7.5%. For the three-class classification task (Dementia/MCI/Control), our LSTM model obtained an F1-score of 0.5521, ranking 20th of 106 and marginally outperforming the best baseline. We trained models on speech data from 157 study participants, with independent evaluation performed on a separate test set of 40 individuals. We found that integrating large language models with self-supervised speech representations enhances the detection of cognitive decline. The proposed approach offers a scalable, data-driven method for early cognitive screening and may support emerging applications in neuropsychological informatics.
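Early fusion of acoustic functionals and speech embeddings for score regression can be sketched as follows. This is a hedged illustration on random stand-in features: scikit-learn's GradientBoostingRegressor substitutes for LightGBM, and the 88- and 768-dimensional shapes are the usual eGeMAPS and HuBERT-base sizes, not data from the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 157  # matches the training cohort size; the features here are synthetic
egemaps = rng.normal(size=(n, 88))   # openSMILE eGeMAPS functionals (88-dim)
hubert = rng.normal(size=(n, 768))   # pooled HuBERT hidden states (768-dim, assumed)
scores = egemaps[:, 0] * 2 + rng.normal(scale=0.5, size=n)  # toy cognitive score

X = np.hstack([egemaps, hubert])     # simple early fusion by concatenation
model = GradientBoostingRegressor(random_state=0)
rmse = np.sqrt(-cross_val_score(model, X, scores, cv=5,
                                scoring="neg_mean_squared_error").mean())
```

Concatenation is the simplest fusion strategy; the submission's LSTM/ResMLP heads are one way to learn a weighted combination instead.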
- Research Article
- 10.3389/fninf.2025.1679196
- Nov 19, 2025
- Frontiers in Neuroinformatics
- J Revathy + 1 more
Introduction: Autism Spectrum Disorder (ASD) diagnosis remains complex due to limited access to large-scale multimodal datasets and privacy concerns surrounding clinical data. Traditional methods rely heavily on resource-intensive clinical assessments and are constrained by unimodal or non-adaptive learning models. To address these limitations, this study introduces AutismSynthGen, a privacy-preserving framework for synthesizing multimodal ASD data and enhancing prediction accuracy.
Materials and methods: The proposed system integrates a Multimodal Autism Data Synthesis Network (MADSN), which employs transformer-based encoders and cross-modal attention within a conditional GAN to generate synthetic data across structural MRI, EEG, behavioral vectors, and severity scores. Differential privacy is enforced via DP-SGD (ε ≤ 1.0). A complementary Adaptive Multimodal Ensemble Learning (AMEL) module, consisting of five heterogeneous experts and a gating network, is trained on both real and synthetic data. Evaluation is conducted on the ABIDE, NDAR, and SSC datasets using metrics such as AUC, F1 score, MMD, KS statistic, and BLEU.
Results: Synthetic augmentation improved model performance, yielding validation AUC gains of ≥ 0.04. AMEL achieved an AUC of 0.98 and an F1 score of 0.99 on real data and approached near-perfect internal performance (AUC ≈ 1.00, F1 ≈ 1.00) when synthetic data were included. Distributional metrics (MMD = 0.04; KS = 0.03) and text similarity (BLEU = 0.70) demonstrated high fidelity between the real and synthetic samples. Ablation studies confirmed the importance of cross-modal attention and entropy-regularized expert gating.
Discussion: AutismSynthGen offers a scalable, privacy-compliant solution for augmenting limited multimodal datasets and enhancing ASD prediction. Future directions include semi-supervised learning, explainable AI for clinical trust, and deployment in federated environments to broaden accessibility while maintaining privacy.
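Distributional fidelity checks such as the reported KS statistic can be reproduced on toy data with scipy. The two Gaussian samples below stand in for the marginal of one feature in a real versus a synthetic cohort; they are not the study's data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
real = rng.normal(loc=0.0, scale=1.0, size=500)       # stand-in real marginal
synthetic = rng.normal(loc=0.05, scale=1.0, size=500)  # nearly matched synthetic

# Two-sample Kolmogorov-Smirnov: the statistic is the maximum gap between
# the empirical CDFs; values near 0 mean the synthetic marginal tracks the
# real one closely (the paper reports KS = 0.03).
ks_stat, p_value = ks_2samp(real, synthetic)
```

In practice this check would be run per feature and per modality; low KS and MMD together indicate the generator is not collapsing to an unrepresentative mode.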
- Front Matter
- 10.3389/fninf.2025.1724386
- Nov 12, 2025
- Frontiers in Neuroinformatics
- Rositsa Paunova + 1 more
- Research Article
- 10.3389/fninf.2025.1700481
- Oct 30, 2025
- Frontiers in Neuroinformatics
- Erik D Fagerholm + 2 more
Introduction: Neural activity can be described in terms of probability distributions that are continuously evolving in time. Characterizing how these distributions are reshaped as they pass between cortical regions is key to understanding how information is organized in the brain.
Methods: We developed a mathematical framework that represents these transformations as information-theoretic gradient flows, dynamical trajectories that follow the steepest ascent of entropy and expectation. The relative strengths of these two functionals provide interpretable measures of how neural probability distributions change as they propagate within neural systems. Following construct validation in silico, we applied the framework to publicly available continuous ΔF/F two-photon calcium recordings from the mouse visual cortex.
Results: The analysis revealed consistent bi-directional transformations between the rostrolateral area and the primary visual cortex across all five mice. These findings demonstrate that the relative contributions of entropy and expectation can be disambiguated and used to describe information flow within cortical networks.
Discussion: We introduce a framework for decomposing neural signal transformations into interpretable information-theoretic components. Beyond the mouse visual cortex, the method can be applied to diverse neuroimaging modalities and scales, thereby providing a generalizable approach for quantifying how information geometry shapes cortical communication.
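A discrete analogue of the entropy component of such a gradient flow can be sketched in a few lines: Euler steps along the simplex-projected gradient of H(p) = -Σ p log p, which drive any distribution toward the uniform maximum. This is an illustrative toy on a three-state distribution, not the paper's continuous-time framework, and it omits the expectation functional.

```python
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p))

def entropy_ascent_step(p, eta=0.05):
    """One Euler step of a gradient flow climbing the entropy functional
    on the probability simplex."""
    grad = -(np.log(p) + 1.0)
    grad -= grad.mean()              # project onto the simplex tangent space
    p_new = np.clip(p + eta * grad, 1e-12, None)
    return p_new / p_new.sum()

p = np.array([0.7, 0.2, 0.1])        # arbitrary starting distribution
traj = [p]
for _ in range(50):
    traj.append(entropy_ascent_step(traj[-1]))
# Entropy rises toward log(3), the value at the uniform distribution.
```

Adding a second term proportional to the gradient of an expectation functional, with a tunable weight, would give the two-functional decomposition the framework estimates from data.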
- Research Article
- 10.3389/fninf.2025.1647194
- Oct 24, 2025
- Frontiers in Neuroinformatics
- Chuanbo Hu + 7 more
Introduction: Diagnosing Autism Spectrum Disorder (ASD) in verbally fluent individuals based on speech patterns in examiner-patient dialogues is challenging because speech-related symptoms are often subtle and heterogeneous. This study aimed to identify distinctive speech characteristics associated with ASD by analyzing recorded dialogues from the Autism Diagnostic Observation Schedule (ADOS-2).
Methods: We analyzed examiner-participant dialogues from ADOS-2 Module 4 and extracted 40 speech-related features categorized into intonation, volume, rate, pauses, spectral characteristics, chroma, and duration. These acoustic and prosodic features were processed using advanced speech analysis tools and used to train machine learning models to classify ASD participants into two subgroups: those with and without A2-defined speech pattern abnormalities. Model performance was evaluated using cross-validation and standard classification metrics.
Results: Using all 40 features, the support vector machine (SVM) achieved an F1-score of 84.49%. After removing Mel-Frequency Cepstral Coefficients (MFCC) and Chroma features to focus on prosodic, rhythmic, energy, and selected spectral features aligned with ADOS-2 A2 scores, performance improved, achieving 85.77% accuracy and an F1-score of 86.27%. Spectral spread and spectral centroid emerged as key features in the reduced set, while MFCC 6 and Chroma 4 also contributed significantly in the full feature set.
Discussion: These findings demonstrate that a compact, diverse set of non-MFCC and selected spectral features effectively characterizes speech abnormalities in verbally fluent individuals with ASD. The approach highlights the potential of context-aware, data-driven models to complement clinical assessments and enhance understanding of speech-related manifestations in ASD.
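Two of the highlighted features, spectral centroid and spectral spread, are the standard first and second magnitude-weighted moments of a frame's spectrum. A minimal sketch follows, using an assumed Hann window and a pure-tone test signal rather than ADOS-2 audio:

```python
import numpy as np

def spectral_centroid_spread(frame, sr):
    """Spectral centroid (magnitude-weighted mean frequency) and spectral
    spread (magnitude-weighted standard deviation around the centroid)."""
    # Hann window: suppresses leakage that would otherwise bias the moments.
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * mag) / np.sum(mag))
    return centroid, spread

# A pure 440 Hz tone sampled at 16 kHz: the centroid should sit near 440 Hz
# with a small spread.
sr = 16000
t = np.arange(2048) / sr
centroid, spread = spectral_centroid_spread(np.sin(2 * np.pi * 440 * t), sr)
```

On real speech these would be computed per short frame and then summarized (mean, variance) over an utterance before being fed to the classifier.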
- Research Article
- 10.3389/fninf.2025.1655003
- Oct 1, 2025
- Frontiers in Neuroinformatics
- D Prabha Devi + 1 more
Introduction: Heart disease is one of the leading causes of mortality worldwide, and early detection is crucial for effective treatment. Phonocardiogram (PCG) signals have shown potential in diagnosing cardiovascular conditions. However, accurate classification of PCG signals remains challenging due to high-dimensional features, leading to misclassification and reduced performance in conventional systems.
Methods: To address these challenges, we propose a Linear Vectored Particle Swarm Optimization (LV-PSO) integrated with a Fuzzy Inference Xception Convolutional Neural Network (XCNN) for early heart risk prediction. PCG signals are analyzed to extract variations such as delta, theta, diastolic, and systolic differences. A Support Scalar Cardiac Impact Rate (S2CIR) is employed to capture disease-specific scalar variations and behavioral impacts. LV-PSO is used to reduce feature dimensionality, and the optimized features are subsequently trained using the Fuzzy Inference XCNN model to classify disease types.
Results: Experimental evaluation demonstrates that the proposed system achieves superior predictive performance compared to existing models. The method attained a precision of 95.6%, a recall of 93.1%, and an overall prediction accuracy of 95.8% across multiple disease categories.
Discussion: The integration of LV-PSO with Fuzzy Inference XCNN enhances feature selection and classification accuracy, significantly improving the diagnostic capabilities of PCG-based systems. These results highlight the potential of the proposed framework as a reliable tool for early heart disease prediction and clinical decision support.
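The dimensionality-reduction step can be illustrated with a generic PSO-based feature selector. This is a plain continuous PSO with 0.5-thresholding on toy data, using a logistic-regression fitness as a stand-in for LV-PSO and the Fuzzy Inference XCNN, so every constant and model choice here is an assumption for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Toy stand-in for PCG feature vectors: 2 informative + 18 noise dimensions.
n, d = 200, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fitness(mask):
    """Cross-validated accuracy of a simple classifier on the kept features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=200)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Minimal PSO over continuous positions in [0, 1]^d; a feature is kept when
# its coordinate exceeds 0.5 (a common binarization heuristic).
n_particles, iters = 12, 15
pos = rng.uniform(size=(n_particles, d))
vel = np.zeros((n_particles, d))
pbest = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    fit = np.array([fitness(p > 0.5) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

selected = gbest > 0.5  # reduced feature mask passed on to the classifier
```

The selected mask would then feed the downstream classifier, replacing the raw high-dimensional feature vector.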
- Supplementary Content
- 10.3389/fninf.2025.1630133
- Sep 29, 2025
- Frontiers in Neuroinformatics
- Paul Nazac + 4 more
In recent years, advances in microscopy and the development of novel fluorescent probes have significantly improved neuronal imaging. Many neuropsychiatric disorders are characterized by alterations in neuronal arborization, neuronal loss—as seen in Parkinson’s disease—or synaptic loss, as in Alzheimer’s disease. Neurodevelopmental disorders can also impact dendritic spine morphogenesis, as observed in autism spectrum disorders and schizophrenia. In this review, we provide an overview of the various labeling and microscopy techniques available to visualize neuronal structure, including dendritic spines and synapses. Particular attention is given to available fluorescent probes, recent technological advances in super-resolution microscopy (SIM, STED, STORM, MINFLUX), and segmentation methods. Aimed at biologists, this review presents both classical segmentation approaches and recent tools based on deep learning methods, with the goal of remaining accessible to readers without programming expertise.
- Research Article
- 10.3389/fninf.2025.1629388
- Sep 24, 2025
- Frontiers in Neuroinformatics
- Maja A Puchades + 6 more
Advancements in methodologies for efficient large-scale acquisition of high-resolution serial microscopy image data have opened new possibilities for experimental studies of cellular and subcellular features across whole brains in animal models. There is a high demand for open-source software and workflows for automated or semi-automated analysis of such data, facilitating anatomical, functional, and molecular mapping in healthy and diseased brains. These studies share a common need to consistently identify, visualize, and quantify the location of observations within anatomically defined regions, ensuring reproducible interpretation of anatomical locations, and thereby allowing meaningful comparisons of results across multiple independent studies. Addressing this need, we have developed a suite of desktop and web-applications for registration of serial brain section images to three-dimensional brain reference atlases (QuickNII, VisuAlign, WebAlign, WebWarp, and DeepSlice) and for performing data analysis in a spatial context provided by an atlas (Nutil, QCAlign, SeriesZoom, LocaliZoom, and MeshView). The software can be utilized in various combinations, creating customized analytical pipelines suited to specific research needs. The web-applications are integrated in the EBRAINS research infrastructure and coupled to the EBRAINS data platform, establishing the foundation for an online analytical workbench. We here present our software ecosystem, exemplify its use by the research community, and discuss possible directions for future developments.
- Research Article
- 10.3389/fninf.2025.1553035
- Sep 11, 2025
- Frontiers in Neuroinformatics
- Gopikrishna Deshpande + 3 more
In large public multi-site fMRI datasets, the sample characteristics, data acquisition methods, and MRI scanner models vary across sites and datasets. This non-neural variability obscures neural differences between groups and leads to poor machine learning-based diagnostic classification of neurodevelopmental conditions. This could potentially be addressed by domain adaptation, which aims to improve classification performance in a given target domain by utilizing knowledge learned from a different source domain, making the data distributions of the two domains as similar as possible. To demonstrate the utility of domain adaptation for multi-site fMRI data, this research developed a variational autoencoder—maximum mean discrepancy (VAE-MMD) deep learning model for three-way diagnostic classification: (i) Autism, (ii) Asperger's syndrome, and (iii) typically developing controls. This study uses the ABIDE-II (Autism Brain Imaging Data Exchange) dataset as the target domain and ABIDE-I as the source domain. The results show that domain adaptation from ABIDE-I to ABIDE-II provides superior test accuracy on ABIDE-II compared to using ABIDE-II alone for classification. Further, augmenting the source domain with additional healthy control subjects from the Healthy Brain Network (HBN) and Amsterdam Open MRI Collection (AOMIC) datasets enables transfer learning and improves ABIDE-II classification performance. Finally, a comparison with statistical data harmonization techniques, such as ComBat, reveals that domain adaptation using VAE-MMD achieves comparable performance, and incorporating transfer learning (TL) with additional healthy control data substantially improves classification accuracy beyond that achieved by statistical methods (such as ComBat) alone. The dataset and the model used in this study are publicly available.
The neuroimaging community can explore the possibility of further improving the model by utilizing the ever-increasing amount of healthy control fMRI data in the public domain.
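The MMD term that a VAE-MMD model minimizes to align source and target feature distributions can be sketched with an RBF kernel on toy features; the kernel bandwidth (gamma=1.0), dimensionality, and sample sizes below are arbitrary choices for the example, not the study's settings.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased empirical estimate of squared MMD with an RBF kernel
    k(a, b) = exp(-gamma * ||a - b||^2). Driving this toward zero during
    training pulls the two domains' latent distributions together."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
# Same distribution: MMD near zero. Mean-shifted: clearly larger.
same = mmd_rbf(rng.normal(size=(200, 5)), rng.normal(size=(200, 5)))
shifted = mmd_rbf(rng.normal(size=(200, 5)),
                  rng.normal(loc=1.0, size=(200, 5)))
```

In the adaptation setting, this quantity would be computed between source- and target-domain latent codes and added to the VAE objective as a penalty.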