Abstract

Achieving biologically interpretable neural-biomarkers and features from neuroimaging datasets is a challenging task in MRI-based dyslexia studies. The challenge becomes more pronounced when the required MRI datasets are collected from multiple heterogeneous sources with inconsistent scanner settings. This study presents a method for improving the biological interpretation of dyslexia’s neural-biomarkers from MRI datasets sourced from publicly available open databases. The proposed system uses a modified histogram normalization (MHN) method to improve dyslexia neural-biomarker interpretation by mapping the pixel intensities of low-quality input neuroimages into the range between the low-intensity region of interest (ROIlow) and the high-intensity region of interest (ROIhigh) of a high-quality reference image. This mapping follows initial image smoothing with a Gaussian filter using an isotropic kernel of size 4 mm. The performance of the proposed smoothing and normalization methods was evaluated in three image post-processing experiments: ROI segmentation, gray matter (GM) tissue volume estimation, and deep learning (DL) classification, using the Computational Anatomy Toolbox (CAT12) and pre-trained models in a MATLAB environment. The three experiments were preceded by pre-processing tasks such as image resizing, labelling, patching, and non-rigid registration. Our results showed that the best smoothing was achieved at a scale value of σ = 1.25, with a 0.9% increase in the peak signal-to-noise ratio (PSNR). Results from the three image post-processing experiments confirmed the efficacy of the proposed methods. Our analysis showed that the proposed MHN and Gaussian smoothing methods improve the comparability of image features and neural-biomarkers of dyslexia, yielding a statistically significant, high Dice similarity coefficient (DSC) index, a low mean square error (MSE), and improved tissue volume estimations.
After ten repetitions of 10-fold cross-validation, the highest accuracy achieved by the DL models was 94.7% at a 95% confidence interval (CI). Finally, our findings confirmed that the proposed MHN method significantly outperformed the state-of-the-art histogram matching normalization method.
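As a rough illustration of the pre-processing pipeline the abstract describes, the sketch below implements a simple linear variant of the ROIlow/ROIhigh intensity remapping, isotropic Gaussian smoothing at the reported σ = 1.25, and the PSNR metric used to evaluate smoothing. The function names and the linear mapping form are assumptions for illustration; the paper's exact MHN formulation is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth(img, sigma=1.25):
    """Isotropic Gaussian smoothing; the study reports sigma = 1.25 as best."""
    return gaussian_filter(img.astype(np.float64), sigma=sigma)

def mhn_linear_map(img, roi_low, roi_high):
    """Hypothetical linear stand-in for the MHN idea: rescale the input's
    intensity range onto [roi_low, roi_high] taken from a high-quality
    reference image."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:  # constant image: map everything to the low ROI bound
        return np.full(img.shape, roi_low, dtype=np.float64)
    return roi_low + (img - lo) * (roi_high - roi_low) / (hi - lo)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to reference."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

In this sketch the remapping is applied after smoothing, matching the order described in the abstract (smoothing first, then intensity normalization).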

Highlights

  • Magnetic resonance imaging (MRI) has been commonly used as a very simple non-invasive imaging technique to study and examine human brain anatomy in order to explain the neuropathogenic causes of various learning disorders, including dyslexia [1,2,3,4,5]

  • The achievement of state-of-the-art accuracy, sensitivity, and specificity for machine learning (ML) and deep learning (DL) methods for dyslexia prediction depends largely on the biological interpretability of the sub-anatomical structures of the different brain tissue features found in the input MRI dataset

  • While the use of a multi-site MRI dataset provides a way of examining a greater number of subjects to develop unique dyslexia diagnostic cohorts [11], the approach is sometimes hindered by noise and high intensity variations, which have a significant negative impact on the results of ML classifiers [13]


Introduction

Magnetic resonance imaging (MRI) has been commonly used as a simple, non-invasive imaging technique to study and examine human brain anatomy in order to explain the neuropathogenic causes of various learning disorders, including dyslexia [1,2,3,4,5]. The achievement of state-of-the-art accuracy, sensitivity, and specificity for ML and DL methods for dyslexia prediction depends largely on the biological interpretability of the sub-anatomical structures of the different brain tissue features found in the input MRI dataset. Such features are otherwise referred to as neural-biomarkers and constitute the neuroimaging dataset’s key region of interest (ROI). The majority of ML-based neuroimaging studies for dyslexia neural-biomarker discrimination require vast amounts of multi-site MRI data to evaluate the anatomical variations and alterations in the brains of the study participants. Such datasets are scanned using various scanner types with inconsistent parameter settings at different geographical locations, within and across subject classes, as seen in the studies by Plonski et al. [10,11] and Jednorog et al. [12]. Intensity normalization and smoothing aim to correct scanner-dependent variations for accurate interpretation of the relevant tissues and neural-biomarkers [8].
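The abstract evaluates how well such scanner-dependent variations are corrected using the Dice similarity coefficient (for segmentation overlap) and the mean square error. A minimal sketch of both metrics, under the standard definitions (the paper's exact evaluation code is not given here):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def mse(x, y):
    """Mean square error between two images of identical shape."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean((x - y) ** 2))
```

A higher DSC between segmentations of differently sourced scans, together with a lower MSE after normalization, is the kind of evidence the study reports for improved cross-site comparability.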

