Abstract

White matter hyperintensities (WMH) are a feature of sporadic small vessel disease that is also frequently observed in magnetic resonance images (MRI) of healthy elderly subjects. Accurate assessment of WMH burden is of crucial importance for epidemiological studies that aim to determine associations between WMHs and cognitive and clinical data, to investigate their causes, and to measure the effects of new treatments in randomized trials. The manual delineation of WMHs is a tedious, costly and time-consuming process that needs to be carried out by an expert annotator (e.g. a trained image analyst or radiologist). The problem of WMH delineation is further complicated by the fact that other pathological features (e.g. stroke lesions) often also appear as hyperintense regions. Recently, several automated methods aiming to tackle the challenges of WMH segmentation have been proposed. Most of these methods have been specifically developed to segment WMHs in MRI but cannot differentiate between WMHs and strokes. Other methods, capable of distinguishing between different pathologies in brain MRI, are not designed with simultaneous WMH and stroke segmentation in mind. Therefore, a task-specific, reliable, fully automated method that can segment and differentiate between these two pathological manifestations on MRI has not yet been fully identified. In this work we propose a convolutional neural network (CNN) that is able to segment hyperintensities and differentiate between WMHs and stroke lesions. Specifically, we aim to distinguish WMHs from hyperintensities caused by stroke lesions due to either cortical, large or small subcortical infarcts. The proposed fully convolutional CNN architecture, called uResNet, comprises an analysis path, which gradually learns low- and high-level features, followed by a synthesis path, which gradually combines and up-samples these features into a class-likelihood semantic segmentation. Quantitatively, the proposed CNN architecture is shown to outperform other well-established and state-of-the-art algorithms in terms of overlap with manual expert annotations. Clinically, the extracted WMH volumes were found to correlate better with the Fazekas visual rating score than those of competing methods or the expert-annotated volumes. Additionally, the associations found between clinical risk factors and the WMH volumes generated by the proposed method were in line with those found using the expert-annotated volumes.
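The exact uResNet configuration (layer counts, filter sizes, residual elements) is not given in this abstract. Purely as an illustration of the analysis/synthesis structure described above, the sketch below shows a minimal, hypothetical PyTorch encoder-decoder with residual blocks; the input channels (FLAIR + T1) and the three output classes (background, WMH, stroke) are assumptions, not the published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResBlock(nn.Module):
    """Simple residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.skip = (nn.Conv2d(in_ch, out_ch, kernel_size=1)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + self.skip(x))


class UResNetSketch(nn.Module):
    """Hypothetical encoder-decoder with residual blocks and skip connections.

    in_ch=2 assumes FLAIR + T1 input channels; n_classes=3 assumes
    background, WMH and stroke labels. Both are illustrative choices.
    """
    def __init__(self, in_ch=2, n_classes=3, base=32):
        super().__init__()
        # Analysis (down-sampling) path: learns low- to high-level features.
        self.enc1 = ResBlock(in_ch, base)
        self.enc2 = ResBlock(base, base * 2)
        self.enc3 = ResBlock(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        # Synthesis (up-sampling) path: combines features across scales.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = ResBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = ResBlock(base * 2, base)
        # Final 1x1 convolution maps features to per-class scores.
        self.head = nn.Conv2d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)           # per-pixel class scores

# Example: a batch of 64x64 two-channel patches -> 3-class score maps.
logits = UResNetSketch()(torch.randn(1, 2, 64, 64))
print(logits.shape)  # torch.Size([1, 3, 64, 64])
```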

Highlights

  • In the following we review existing methods and challenges related to our work, particularly multiple sclerosis (MS), white matter hyperintensity (WMH) and stroke lesion segmentation in magnetic resonance (MR) imaging

  • Although some of the methods mentioned here were proposed for segmenting pathologies other than the ones we explore in this work, they can be applied to different tasks

  • In this work we have proposed a convolutional neural network (CNN) framework, uResNet, for the segmentation of WMHs that is capable of distinguishing between WMHs arising from different pathologies, mainly WMHs of presumed VD origin and those from stroke lesions

Introduction

One of the most widely used metrics to assess WMH burden and severity is the Fazekas visual rating scale (i.e. score) (Fazekas et al., 1987). In this scale, a radiologist visually rates the deep white matter and peri-ventricular areas of an MR scan into four possible categories each, depending on the size, location and confluence of lesions. Despite the number of proposed methods, no automated solution is currently widely used in clinical practice and only a few of them are publicly available (Shiee et al., 2010a; Damangir et al., 2012; Schmidt et al., 2012). This is partly because lesion load, as defined in most previously proposed automatic WMH segmentation algorithms, does not take into account the contribution of stroke lesions, as these methods are generally unable to differentiate between these two types of lesions.
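The study's own statistical analysis is not reproduced here. As a hedged illustration of how automatically extracted WMH volumes could be compared against the ordinal Fazekas ratings, the snippet below computes Spearman's rank correlation with SciPy on made-up values; both arrays are purely hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: per-subject WMH volumes (ml) from an automated method
# and the corresponding Fazekas visual ratings (0-3). Values are illustrative.
wmh_volume_ml = np.array([1.2, 3.5, 0.8, 12.4, 7.9, 25.1, 4.3, 18.6])
fazekas_score = np.array([0, 1, 0, 2, 1, 3, 1, 3])

# Spearman's rank correlation is appropriate because the Fazekas scale
# is ordinal rather than continuous.
rho, p_value = spearmanr(wmh_volume_ml, fazekas_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```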
