Abstract

Magnetic resonance imaging (MRI) produces detailed images of internal organs using a magnetic field. Although MRI is non-invasive and well suited to repeated imaging, the low contrast of MR images in the target area makes tissue segmentation a challenging problem. This study shows the potential advantages of synthetic high tissue contrast (HTC) images generated through image-to-image translation techniques. Specifically, we use a novel cycle generative adversarial network (Cycle-GAN) with an attention mechanism to increase the contrast within the tissue. The attention block, together with training on HTC images, helps our model enhance tissue visibility. We use a multistage architecture that concentrates on a single tissue at a time and filters out irrelevant context at every stage in order to increase the resolution of the HTC images. The multistage architecture reduces the gap between the source and target domains and alleviates artefacts in the synthetic images. We apply our HTC image synthesis method to two public datasets. To validate the effectiveness of these images, we use the HTC MR images in both end-to-end and two-stage segmentation structures. Experiments with three segmentation baselines on BraTS'18 demonstrate that incorporating the synthetic HTC images into the multimodal segmentation framework improves the average Dice similarity scores (DSCs) by 0.8%, 0.6%, and 0.5% on the whole tumour (WT), tumour core (TC), and enhancing tumour (ET), respectively, while removing one real MRI channel from the segmentation pipeline. Moreover, segmenting infant brain tissue in T1w MR slices through our framework improves DSCs by approximately 1% for cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM) compared to state-of-the-art segmentation techniques. The source code for synthesising HTC images is publicly available.
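
To make the attention-guided translation described above more concrete, the following is a minimal sketch, assuming a simple encoder-decoder CycleGAN-style generator with a spatial attention gate; the class names, layer choices, and mask-blending scheme are illustrative assumptions and not the paper's exact architecture. The soft mask gates the translation so that only the attended tissue region is modified, which is one way to limit artefacts outside the target tissue.

```python
# Minimal sketch (assumed layers and names; not the authors' released code).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Predicts a soft spatial mask that focuses translation on tissue regions."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),  # values in [0, 1]: 1 = translate, 0 = keep the input
        )

    def forward(self, features):
        return self.mask(features)

class AttentionCycleGenerator(nn.Module):
    """Generator G: low-contrast MR slice -> synthetic HTC slice.

    The attention mask blends the translated output with the input so that
    only the attended tissue is changed, leaving the rest of the slice intact.
    """
    def __init__(self, in_channels=1, base=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, kernel_size=7, padding=3),
            nn.InstanceNorm2d(base),
            nn.ReLU(inplace=True),
        )
        self.attention = AttentionGate(base)
        self.decoder = nn.Sequential(
            nn.Conv2d(base, in_channels, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, x):
        features = self.encoder(x)
        mask = self.attention(features)      # (N, 1, H, W) soft attention map
        translated = self.decoder(features)  # fully translated slice
        # Attended regions take the HTC translation; the rest keeps the input.
        return mask * translated + (1.0 - mask) * x

# Usage: x is a batch of single-channel MR slices scaled to [-1, 1].
g = AttentionCycleGenerator()
x = torch.randn(2, 1, 128, 128)
y = g(x)  # synthetic HTC slices with the same shape as x
```

In a full Cycle-GAN setup this generator would be paired with a reverse generator and two discriminators trained with adversarial and cycle-consistency losses; the multistage variant in the paper would apply such a stage per tissue, which is not reproduced here.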
