Abstract

We describe how to design supervised deep learning models for medical image reconstruction, the process of transforming sensor-domain data (measured by imaging devices) into image-domain data (interpretable by healthcare professionals). We show how to develop image-domain, sensor-domain, and dual-domain approaches to improve reconstruction performance. For image-domain approaches, we first show that deep neural networks can be applied directly to the image-domain data as a postprocessing step to reduce artifacts after reconstruction; we then provide a case study demonstrating how to implement an image-domain approach for sparse-view artifact reduction. For sensor-domain approaches, we show how to leverage deep neural networks to correct the sensor-domain data so that compromised imaging information is addressed at an earlier stage. Although many ideas from image-domain approaches could be applied directly in the sensor domain, problems present differently there, and we highlight the designs that are specific to sensor-domain approaches. Finally, for dual-domain approaches, we show that it is feasible to design deep learning models that address problems in both the image domain and the sensor domain. More importantly, such a dual-domain approach can be implemented within a unified framework so that learning in both domains is achieved in an end-to-end manner. To elaborate on this dual-domain learning, we provide a case study introducing DuDoNet, a novel network for joint sinogram- and image-domain metal artifact reduction.
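The dual-domain idea described above can be illustrated with a minimal NumPy sketch. This is not the actual DuDoNet architecture: the linear operator `A` stands in for the Radon transform, its pseudo-inverse stands in for filtered back-projection, and `sensor_net`, `image_net`, and `dudo_forward` are hypothetical, illustrative residual layers standing in for trained CNNs. The point is only the composition: because every step is differentiable, both sets of weights could be trained jointly, end to end (training itself is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward operator standing in for the Radon transform:
# sensor data s = A @ x for a flattened image x. (Illustrative only.)
n_pixels, n_measurements = 16, 24
A = rng.standard_normal((n_measurements, n_pixels))
A_pinv = np.linalg.pinv(A)  # stand-in for filtered back-projection

def sensor_net(s, W_s):
    """Sensor-domain correction: refine the (corrupted) measurements.

    A residual linear layer stands in for a trained sinogram CNN."""
    return s + W_s @ s

def image_net(x, W_i):
    """Image-domain refinement: suppress residual artifacts after
    reconstruction. Again a residual linear layer stands in for a CNN."""
    return x + W_i @ x

def dudo_forward(s_corrupt, W_s, W_i):
    """Dual-domain pipeline: correct sinogram -> reconstruct -> refine image.

    Every stage is differentiable, so W_s and W_i could be learned
    jointly from an end-to-end reconstruction loss."""
    s_hat = sensor_net(s_corrupt, W_s)   # sensor-domain stage
    x_hat = A_pinv @ s_hat               # differentiable reconstruction
    return image_net(x_hat, W_i)         # image-domain stage
```

With both weight matrices set to zero, the residual layers reduce to the identity and the pipeline collapses to plain pseudo-inverse reconstruction, which recovers `x` exactly for clean, consistent measurements; nonzero learned weights would then correct for corruption such as metal traces in the sinogram.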
