Abstract

Medical image synthesis from one modality to another is an intensity transformation between two images acquired with different imaging devices, such as synthesizing a Computed Tomography (CT) image from a Magnetic Resonance (MR) image, or a T2-weighted (T2W) or proton-density-weighted (PDW) image from an MR T1-weighted (T1W) image. MR-based synthetic CT is useful in several clinical settings, such as PET attenuation correction for PET/MR and MR/CT registration. In this paper, we propose a novel method based on fully convolutional networks (FCN) to generate synthetic CT images from MR images. We adopt a U-Net-like FCN model for image regression from MR to CT. To achieve better results, three key steps are proposed. First, in the preprocessing step, MR and CT image intensities are normalized by the mean value of brain tissue. Second, a tissue-focused loss function is proposed for convolutional neural network (CNN) regression. Third, we adopt three orthogonal planar convolutions instead of 3D convolutions to avoid heavy computation while preserving 3D structure information. Fifteen paired brain MR and CT datasets are analyzed. Experimental results show that our method accurately synthesizes CT images in various scenarios, even for images with large rotations or lesions, and it also extends to accurate synthesis between different MR sequences.
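As an illustration of the first two steps described above, the following minimal NumPy sketch normalizes a volume by the mean intensity of brain tissue and evaluates a tissue-focused regression loss restricted to masked voxels. This is not the authors' implementation; the brain mask, the placeholder threshold, and the choice of squared error are assumptions made for illustration only.

```python
import numpy as np

def normalize_by_brain_mean(volume, brain_mask):
    """Scale a volume so the mean intensity inside the brain mask becomes 1.0.

    volume:     3D array of MR (or CT) intensities.
    brain_mask: boolean 3D array marking brain-tissue voxels
                (how the mask is obtained, e.g. skull stripping,
                is an assumption here).
    """
    brain_mean = volume[brain_mask].mean()
    return volume / brain_mean

def tissue_focused_loss(pred_ct, true_ct, tissue_mask):
    """Mean squared error computed only over tissue voxels.

    Voxels outside the mask (air, background) do not contribute, so the
    regression concentrates on anatomically relevant regions.  The exact
    loss form used in the paper may differ; squared error is assumed.
    """
    diff = (pred_ct - true_ct)[tissue_mask]
    return np.mean(diff ** 2)

if __name__ == "__main__":
    # Hypothetical usage with random volumes and a placeholder mask.
    rng = np.random.default_rng(0)
    mr = rng.uniform(0.0, 500.0, size=(32, 32, 32))
    mask = mr > 100.0                      # placeholder brain mask
    mr_norm = normalize_by_brain_mean(mr, mask)
    pred = rng.uniform(-1000.0, 1000.0, size=mr.shape)
    true = rng.uniform(-1000.0, 1000.0, size=mr.shape)
    print(tissue_focused_loss(pred, true, mask))
```

The masked loss and mean-based normalization are the conceptual core of the two steps; the third step (three orthogonal planar convolutions in place of full 3D convolutions) is an architectural choice inside the FCN and is not sketched here.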
