Abstract

Multimodal imaging techniques have received a great deal of attention since their inception because they promise enhanced imaging performance. In this paper, a novel joint reconstruction framework for computed tomography (CT) and magnetic resonance imaging (MRI) is implemented and evaluated. The CT and MRI data sets are synchronously acquired and registered on a hybrid CT-MRI platform. Because the data sets are highly undersampled, conventional methods (e.g., analytic reconstruction) cannot produce acceptable results. To overcome this limitation, we employ compressed sensing (CS) sparse priors derived from the discrete gradient transform. In addition, to exploit multimodal imaging information, the concept of projection distance is introduced to penalize large divergence between images from different modalities. During the optimization, the CT and MRI images are alternately updated using the latest information from the current iteration. The method thus exploits the structural similarities between the CT and MRI images to achieve better reconstruction quality. The entire framework is accelerated with parallel processing on an NVIDIA M5000M graphics processing unit, reducing the computational time from hours to minutes. The performance of the proposed approach is demonstrated on a pair of undersampled CT and MRI body images. For comparison, the CT and MRI images are also reconstructed with an analytic method and with iterative methods that do not exploit structural similarity, referred to as independent reconstructions. The results show that the proposed joint reconstruction yields better image quality than both the analytic method and the independent reconstructions, recovering the main features of the true images.
It is concluded that the structural similarities and correlations residing in images from different modalities can mutually promote the quality of joint image reconstruction.

Highlights

  • Multimodal imaging techniques have been widely hailed as powerful tools to assist the modern clinical decision-making process

  • The joint reconstruction framework is tested on a pair of body images, including a computed tomography (CT) image and a magnetic resonance imaging (MRI) image

  • The undersampled MRI k-space data are collected by applying an undersampling mask to the fully sampled k-space data, which are generated by applying the Fourier transform to the original image
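The retrospective undersampling described in the last highlight can be sketched as follows. This is a minimal illustration, not the authors' code: the `keep_fraction` value, the random row mask, and the always-kept low-frequency band are all illustrative assumptions about how such a mask might be built.

```python
import numpy as np

def undersampled_kspace(image, keep_fraction=0.3, seed=0):
    """Simulate undersampled MRI data: Fourier-transform the image to
    fully sampled k-space, then apply a row-wise undersampling mask."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    rng = np.random.default_rng(seed)
    rows = image.shape[0]
    # Randomly keep a fraction of phase-encoding rows...
    mask = rng.random(rows) < keep_fraction
    # ...but always keep a central band of low-frequency rows,
    # which carry most of the image energy (illustrative choice).
    mask[rows // 2 - rows // 16 : rows // 2 + rows // 16] = True
    return kspace * mask[:, None], mask

# Usage: undersample the k-space of a toy 64x64 image.
data, mask = undersampled_kspace(np.ones((64, 64)))
```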


Summary

INTRODUCTION

Multimodal imaging techniques have been widely hailed as powerful tools to assist the modern clinical decision-making process. It is reasonable to add total variation (TV) as a regularizer, since the gradient-based sparse prior is important in CS-based medical image reconstruction for suppressing streaks and reducing noise. From another perspective, since a multimodal platform samples the same object simultaneously, the images from different modalities must share structural similarities in boundaries and edges. This research is part of the planned omni-tomography project and focuses on the image reconstruction portion, which requires the development of a new joint CT-MRI reconstruction from highly undersampled CT and MRI data sets. The proposed sparse-prior-based projection distance optimization method updates the image signals at each iteration while incorporating the information provided by the CT-MRI coupling.

THEORY AND METHODS
MULTIMODAL COUPLING
RESULTS AND DISCUSSION
CONCLUSIONS
