Abstract

Traditional magnetic resonance imaging (MRI) provides three contrasts: $T_{1}$ , $T_{2}$ , and proton density (PD). However, only one contrast can be highlighted in a single acquisition, which not only limits the reference standard for diagnosing disease but also increases patient discomfort and medical expenses when two differently weighted MRI scans are required. To address this problem, we propose a deep-learning-based method that provides two MRI contrasts from a single signal acquisition. In this paper, a new model (PTGAN) based on generative adversarial networks is devised to convert $T_{2}$ -weighted MRI images into PD-weighted MRI images. In addition, we devise four different network structures as reference models for PTGAN, and evaluate PTGAN on brain MRI images of different anatomical sections, MRI images with different noise levels, knee-cartilage MRI images, and pathological MRI images of different body parts. The results show that the proposed PTGAN effectively preserves structure and texture and improves resolution during conversion. Moreover, converting each $T_{2}$ -weighted MRI image takes only about 4 ms, and the additional contrast provides more information for disease diagnosis.
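The abstract gives no implementation detail, but the general technique it names, a generative adversarial network trained to translate paired $T_{2}$ -weighted slices into PD-weighted slices, can be sketched. Below is a minimal, illustrative PyTorch sketch of pix2pix-style paired image translation: the network shapes, the `lambda_l1` loss weight, and the optimizer settings are all assumptions chosen for illustration, not the paper's actual PTGAN architecture or its four reference models.

```python
# Illustrative sketch only: a minimal conditional GAN for paired
# T2-weighted -> PD-weighted slice translation. All hyperparameters and
# layer choices are assumptions, not the PTGAN model from the paper.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping a 1-channel T2 slice to a PD slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic that scores the concatenated (T2, PD) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )
    def forward(self, t2, pd):
        return self.net(torch.cat([t2, pd], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(t2, pd, lambda_l1=100.0):
    """One adversarial step on a paired (T2, PD) batch."""
    # Discriminator: distinguish real pairs from generated pairs.
    fake_pd = G(t2)
    d_real = D(t2, pd)
    d_fake = D(t2, fake_pd.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool D while staying close to the ground-truth PD slice,
    # which is what encourages structure and texture to be preserved.
    d_fake = D(t2, fake_pd)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake_pd, pd)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy usage: random tensors stand in for a batch of paired 256x256 slices.
t2 = torch.randn(4, 1, 256, 256)
pd = torch.randn(4, 1, 256, 256)
print(train_step(t2, pd))
```

At inference time only the generator is kept, a single forward pass per slice, which is consistent with the millisecond-scale conversion time the abstract reports.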
