Abstract

Synthesising computed tomography (CT) images from magnetic resonance images (MRI) plays an important role in the field of medical image analysis, both for quantification and diagnostic purposes. Convolutional neural networks (CNNs) have achieved state-of-the-art results in image-to-image translation for brain applications. However, synthesising whole-body images remains largely uncharted territory, involving many challenges including large image size, limited field of view, complex spatial context, and anatomical differences between images acquired at different times. We propose an uncertainty-aware multi-channel multi-resolution 3D cascade network specifically aimed at whole-body MR to CT synthesis. The mean absolute error (MAE) of the synthetic CT generated with the MultiResunc network (73.90 HU) is lower than that of multiple baseline CNNs such as 3D U-Net (92.89 HU), HighRes3DNet (89.05 HU) and deep boosted regression (77.58 HU), demonstrating superior synthesis performance. Finally, we exploit the extrapolation properties of the MultiRes networks on sub-regions of the body.
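
For concreteness, below is a minimal sketch of how such a mean absolute error in Hounsfield units could be computed; the function name and the optional body mask are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def mae_hu(synth_ct, ref_ct, body_mask=None):
    """Mean absolute error (HU) between a synthetic and a reference CT.

    synth_ct, ref_ct: 3D volumes in Hounsfield units, same shape.
    body_mask: optional boolean volume restricting the error to the body,
    so that the large air background does not dominate the average.
    """
    diff = np.abs(synth_ct.astype(np.float64) - ref_ct.astype(np.float64))
    if body_mask is not None:
        diff = diff[body_mask]
    return float(diff.mean())
```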

Highlights

  • Simultaneous positron emission tomography and magnetic resonance imaging (PET/MRI) is an important tool in both clinical and research applications, allowing a multiparametric evaluation of the subject

  • A multi-centre study on brain images has shown that obtaining tissue attenuation coefficients from synthesised computed tomography (CT) images leads to state-of-the-art results for PET/MRI attenuation correction (AC) [1] (see the conversion sketch after this list)

  • We propose the use of a deep learning framework for multi-resolution image translation designed for whole-body MR to CT synthesis (MultiRes)
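
Since the AC highlight above relies on mapping synthesised CT values to attenuation coefficients, the sketch below illustrates the standard bilinear conversion from CT Hounsfield units to linear attenuation coefficients at 511 keV; the breakpoint and bone-segment coefficients are typical 120 kVp literature values, not parameters taken from this paper.

```python
import numpy as np

# Illustrative bilinear HU -> mu(511 keV) conversion for PET AC.
# Slopes and breakpoint are common 120 kVp values, NOT from this paper.
MU_WATER_511 = 0.096  # cm^-1, attenuation of water at 511 keV

def hu_to_mu511(ct_hu, breakpoint_hu=47.0):
    ct_hu = np.asarray(ct_hu, dtype=np.float64)
    # Below the breakpoint: air/soft-tissue segment, water-equivalent scaling.
    mu_soft = MU_WATER_511 * (ct_hu + 1000.0) / 1000.0
    # Above the breakpoint: bone segment with a shallower slope.
    mu_bone = 5.1e-5 * (ct_hu + 1000.0) + 4.71e-2
    mu = np.where(ct_hu <= breakpoint_hu, mu_soft, mu_bone)
    return np.clip(mu, 0.0, None)  # attenuation cannot be negative
```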

Introduction

Simultaneous positron emission tomography and magnetic resonance imaging (PET/MRI) is an important tool in both clinical and research applications, allowing a multiparametric evaluation of the subject. In 2019, Ge et al. [4] attempted to translate whole-body MR images to CT images by introducing a multi-view adversarial learning scheme that predicts 2D pseudo-CT (pCT) images along three axes (axial, coronal, sagittal). They obtain a 3D volume for each axis by stacking the 2D slices together, followed by an average fusion that yields the final 3D volume. In the field of medical imaging, multi-resolution learning has been utilised to solve image classification [8], super-resolution [9] and segmentation [10] tasks. These methods learn strong features at multiple levels of scale and abstraction, finding the input/output voxel correspondence based on these features. We extend the framework to allow for multi-channel inputs and add additional validation on a brain dataset, showing state-of-the-art synthesis results.
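
As an illustration of the multi-view fusion described above, the sketch below assembles slice-wise 2D predictions into one 3D volume per axis and averages them; predict_slice is a hypothetical 2D MR-to-pCT model, not Ge et al.'s network.

```python
import numpy as np

def average_fusion(mr_volume, predict_slice):
    """Predict pCT slice-by-slice along each of the three axes, then
    average the three reassembled volumes (the fusion step described
    above). `predict_slice` maps a 2D MR slice to a 2D pCT slice."""
    per_axis = []
    for axis in range(3):  # axial / coronal / sagittal (convention-dependent)
        slices = [predict_slice(np.take(mr_volume, i, axis=axis))
                  for i in range(mr_volume.shape[axis])]
        # Restack the 2D predictions along the same axis they were taken from,
        # so all three volumes share the original orientation.
        per_axis.append(np.stack(slices, axis=axis))
    return np.mean(per_axis, axis=0)
```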

Methods
Modelling Heteroscedastic Uncertainty
Modelling Epistemic Uncertainty
Implementation Details
Experiments
Quantitative Evaluation
Qualitative Evaluation
Results
Discussion and Conclusions