Abstract

The neuroimage analysis community has neglected the automated segmentation of the olfactory bulb (OB) despite its crucial role in olfactory function. The lack of an automatic processing method for the OB can be explained by its challenging properties (small size, location, and poor visibility on traditional MRI scans). Nonetheless, recent advances in MRI acquisition techniques and resolution have allowed raters to generate more reliable manual annotations. Furthermore, the high accuracy of deep learning methods for solving semantic segmentation problems provides us with an option to reliably assess even small structures. In this work, we introduce a novel, fast, and fully automated deep learning pipeline to accurately segment OB tissue on sub-millimeter T2-weighted (T2w) whole-brain MR images. To this end, we designed a three-stage pipeline: (1) Localization of a region containing both OBs using FastSurferCNN, (2) Segmentation of OB tissue within the localized region through four independent AttFastSurferCNN networks - a novel deep learning architecture with a self-attention mechanism to improve modeling of contextual information, and (3) Ensemble of the predicted label maps. For this work, both OBs were manually annotated in a total of 620 T2w images for training (n=357) and testing. The OB pipeline exhibits high performance in terms of boundary delineation, OB localization, and volume estimation across a wide range of ages in 203 participants of the Rhineland Study (Dice Score (Dice): 0.852, Volume Similarity (VS): 0.910, and Average Hausdorff Distance (AVD): 0.215 mm). Moreover, it also generalizes to scans of an independent dataset never encountered during training, the Human Connectome Project (HCP), with different acquisition parameters and demographics, evaluated in 30 cases at the native 0.7 mm HCP resolution (Dice: 0.738, VS: 0.790, and AVD: 0.340 mm), and the default 0.8 mm pipeline resolution (Dice: 0.782, VS: 0.858, and AVD: 0.268 mm). We extensively validated our pipeline not only with respect to segmentation accuracy but also to known OB volume effects, where it can sensitively replicate age effects (β=−0.232, p<.01). Furthermore, our method can analyze a 3D volume in less than a minute (GPU) in an end-to-end fashion, providing a validated, efficient, and scalable solution for automatically assessing OB volumes.
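To make the three-stage design above concrete, the following is a minimal sketch of how such a pipeline could be wired together (localize a region containing both OBs, segment it with four independently trained networks, and merge the label maps). All names, the crop size, and the probability-averaging ensemble are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the three-stage OB pipeline; names and sizes are placeholders.
import numpy as np
import torch


def crop_around(volume, center, size):
    """Extract a fixed-size sub-volume centered on `center`, clamped to the image bounds."""
    slices = []
    for c, s, dim in zip(center, size, volume.shape):
        start = int(np.clip(round(c) - s // 2, 0, dim - s))
        slices.append(slice(start, start + s))
    return volume[tuple(slices)], tuple(slices)


def segment_ob(t2w, localizer, seg_models, size=(96, 96, 96)):
    """Localize a region with both OBs, segment it with an ensemble, and merge the predictions."""
    # Stage 1: coarse localization of a region containing both OBs.
    centroid = localizer(t2w)                      # e.g. predicted OB centroid in voxel coordinates
    region, slices = crop_around(t2w, centroid, size)

    # Stage 2: OB segmentation by four independently trained networks.
    inp = torch.from_numpy(region[None, None]).float()
    with torch.no_grad():
        probs = [torch.softmax(model(inp), dim=1) for model in seg_models]

    # Stage 3: ensemble of the predicted label maps (here: averaging the soft predictions).
    labels = torch.stack(probs).mean(dim=0).argmax(dim=1).squeeze(0)

    # Paste the cropped prediction back into full-volume space.
    out = np.zeros(t2w.shape, dtype=np.uint8)
    out[slices] = labels.numpy().astype(np.uint8)
    return out
```

In practice the localizer and the four segmentation networks would be trained separately, and the ensemble step could equally be a majority vote over hard labels rather than probability averaging.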

Highlights

  • The quality of the segmentations was evaluated with metrics aimed at assessing different properties: spatial overlap, spatial distance, and volume similarity. As olfactory bulb (OB) volumes are usually the desired marker for downstream analysis, we included the volume-based metric volume similarity (VS) [64].

  • We assessed spatial overlap, as it provides both size and localization consensus, by computing the Dice similarity coefficient (Dice) between the ground-truth (G) and predicted (P) label maps, a common metric for validating segmentations. While volume similarity (VS) is related to Dice, it does not take the overlap of the segmentations into account and can reach its maximum value even when the overlap is zero (a minimal computation sketch follows this list).

  • For the localization stage, the overlap metric was not used as the localization marker and was replaced with the localization distance (R), a metric more suitable for assessing the accuracy of the centroid coordinate produced in this stage. We also evaluated the benefit of merging four AttFastSurferCNN networks trained under different training-data conditions and (E3) assessed the sensitivity of the pipeline.
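Because the highlights above turn on how Dice, volume similarity (VS), and the localization distance (R) differ, here is a minimal sketch of the three measures following their standard definitions. It is illustrative only, not the evaluation code used in the paper; the 0.8 mm default voxel size and the centroid-based reading of R are assumptions.

```python
# Illustrative implementations of the metrics named above; not the paper's evaluation code.
import numpy as np


def dice(g, p):
    """Dice similarity coefficient between a ground-truth mask G and a predicted mask P."""
    g, p = g.astype(bool), p.astype(bool)
    denom = g.sum() + p.sum()
    return 2.0 * np.logical_and(g, p).sum() / denom if denom else 1.0


def volume_similarity(g, p):
    """VS = 1 - ||G| - |P|| / (|G| + |P|): compares volumes only, ignoring where they overlap."""
    vg, vp = int(g.astype(bool).sum()), int(p.astype(bool).sum())
    return 1.0 - abs(vg - vp) / (vg + vp) if (vg + vp) else 1.0


def localization_distance(g, predicted_centroid, voxel_size=(0.8, 0.8, 0.8)):
    """Euclidean distance (mm) between a predicted centroid and the centroid of mask G (assumed reading of R)."""
    g_centroid = np.array(np.nonzero(g)).mean(axis=1)
    offset = (g_centroid - np.asarray(predicted_centroid)) * np.asarray(voxel_size)
    return float(np.linalg.norm(offset))
```

The VS function makes the bullet's point explicit: two equally sized masks that do not overlap at all still yield VS = 1, whereas their Dice is 0.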

Introduction

The unseen test set was evaluated by computing three similarity metrics (Dice, AVD, and VS) between the predicted maps and the manual labels. To put the AttFastSurferCNN results into context, we compared the performance against the inter- and intra-rater variability scores obtained in the manual annotations. It is important to note that we did not compute overlap segmentation performance metrics (Dice and AVD) across label maps of the same subject from different sequences, as this would require registration. We also observed hemisphere asymmetry, where the maximum volume predicted for any hemisphere was 8.7 mm3, translating into a detection of only 17 voxels.
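Since the paragraph above reports AVD alongside Dice and VS, the sketch below shows one common way to compute the Average Hausdorff Distance between a ground-truth and a predicted mask using Euclidean distance transforms. The symmetric max-of-means definition and the 0.8 mm default voxel spacing are assumptions and may differ from the exact variant used in the paper.

```python
# Hedged sketch of the Average Hausdorff Distance (AVD) between two binary masks.
import numpy as np
from scipy.ndimage import distance_transform_edt


def average_hausdorff_distance(g, p, voxel_size=(0.8, 0.8, 0.8)):
    """AVD = max(d(G,P), d(P,G)), where d(A,B) is the mean distance (mm) from
    every foreground voxel of A to its nearest foreground voxel of B."""
    g, p = g.astype(bool), p.astype(bool)
    if not g.any() or not p.any():
        return float("nan")  # undefined when either mask is empty
    # Distance maps: for every voxel, the distance to the nearest foreground voxel of the other mask.
    dist_to_p = distance_transform_edt(~p, sampling=voxel_size)
    dist_to_g = distance_transform_edt(~g, sampling=voxel_size)
    d_gp = dist_to_p[g].mean()  # mean distance from G to P
    d_pg = dist_to_g[p].mean()  # mean distance from P to G
    return float(max(d_gp, d_pg))
```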
