Abstract
In the diagnosis of aortic dissection (AD), synthesizing contrast-enhanced CT (CE-CT) images from non-contrast CT (NC-CT) images has recently become an important topic. Existing methods have achieved promising results but cannot synthesize a continuous and clear intimal flap from NC-CT images. In this paper, we propose a multi-stage cascade generative adversarial network (MCGAN) that explicitly captures the features of the intimal flap for better synthesis of aortic dissection images. Because the intimal flap has variable shapes and fine details, we extract features in two ways: dense residual attention blocks (DRAB) are integrated to extract shallow features, and a UNet is employed to extract deep features; the deep and shallow features are then cascaded and fused. To handle incomplete flaps or missing details, we use spatial attention and channel attention to extract key features and locations, and multi-scale fusion is used to preserve the continuity of the intimal flap. We evaluate the method on a set of 124 patients (62 with AD and 62 without AD). The results show that the synthesized images share the characteristics of the real images and that the method outperforms popular existing methods.
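The following is a minimal PyTorch sketch, not the authors' reference implementation, of the feature cascade the abstract describes: a dense residual attention block (DRAB) for shallow features, a small UNet standing in for the deep-feature branch, channel and spatial attention inside the DRAB, and a fusion of the two streams before decoding. All module and parameter names (ChannelSpatialAttention, TinyUNet, CascadeGenerator, growth, base) are assumptions introduced for illustration.

```python
# Hypothetical reconstruction of the shallow/deep feature cascade described in
# the abstract. Module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """Channel attention (squeeze-and-excitation style) followed by spatial attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)        # reweight channels
        return x * self.spatial(x)     # reweight spatial locations


class DRAB(nn.Module):
    """Dense residual attention block: densely connected convs + attention + residual."""
    def __init__(self, channels, growth=32, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=1), nn.ReLU(inplace=True)))
            c += growth
        self.fuse = nn.Conv2d(c, channels, 1)   # local feature fusion
        self.attn = ChannelSpatialAttention(channels)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        out = self.fuse(torch.cat(feats, dim=1))
        return x + self.attn(out)               # residual connection


class TinyUNet(nn.Module):
    """A two-level encoder-decoder used here to stand in for the deep-feature branch."""
    def __init__(self, channels):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, x):
        return self.dec(self.enc(x)) + x        # skip connection


class CascadeGenerator(nn.Module):
    """Shallow (DRAB) and deep (UNet) features are cascaded, fused, and decoded."""
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        self.head = nn.Conv2d(in_ch, base, 3, padding=1)
        self.shallow = DRAB(base)
        self.deep = TinyUNet(base)
        self.fuse = nn.Conv2d(base * 2, base, 1)
        self.tail = nn.Conv2d(base, in_ch, 3, padding=1)

    def forward(self, nc_ct):
        f = self.head(nc_ct)
        s = self.shallow(f)                     # shallow, detail-preserving features
        d = self.deep(s)                        # deep, context-level features (cascaded)
        return self.tail(self.fuse(torch.cat([s, d], dim=1)))


if __name__ == "__main__":
    fake_ce = CascadeGenerator()(torch.randn(1, 1, 128, 128))  # NC-CT slice -> synthetic CE-CT
    print(fake_ce.shape)  # torch.Size([1, 1, 128, 128])
```

In a full GAN setup this generator would be trained adversarially against a discriminator on paired NC-CT/CE-CT slices; the multi-stage cascade and multi-scale fusion in the paper would add further stages and scales beyond this single-stage sketch.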