Abstract

To enable robust and precise 3D vessel shape extraction and quantification from in-vivo Magnetic Resonance Imaging (MRI), this paper presents a novel multi-scale Knowledge Transfer Vision Transformer (KT-ViT) for 3D vessel shape segmentation. First, it uniquely integrates convolutional embeddings with transformer encoders in a U-net architecture, simultaneously capturing local receptive fields through convolution layers and global context through transformer encoders in a multi-scale fashion. It therefore intrinsically enriches local vessel features while promoting global connectivity and continuity, yielding more accurate and reliable vessel shape segmentation. Furthermore, to enable segmentation of fine-scale vessel shapes from relatively low-resolution (LR) images, a novel knowledge transfer network is designed to exploit the inter-dependencies of the data and automatically transfer knowledge gained from high-resolution (HR) data to the low-resolution network at multiple levels, including the multi-scale feature levels and the decision level, through an integration of multi-level loss functions. The capability of the HR image transformer network to model the distribution of fine-scale vessel shapes can thus be transferred to the LR image transformer, enhancing its ability to segment fine vessels. Extensive experiments on public image datasets demonstrate that our method outperforms other state-of-the-art deep learning methods.
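The multi-level knowledge transfer described above can be sketched as a combined objective: a supervised segmentation loss plus transfer terms that align the LR (student) network with the HR (teacher) network at the multi-scale feature levels and at the decision level. The sketch below is illustrative only; the exact loss forms and weighting scheme (`alpha`, `beta`, MSE for features, KL divergence for decisions) are assumptions, not the paper's specification.

```python
import numpy as np

def feature_transfer_loss(hr_feats, lr_feats):
    """Feature-level transfer: mean-squared error between teacher (HR)
    and student (LR) feature maps, averaged over the multi-scale levels.
    (MSE is an assumed choice for illustration.)"""
    return float(np.mean([np.mean((h - l) ** 2)
                          for h, l in zip(hr_feats, lr_feats)]))

def decision_transfer_loss(hr_probs, lr_probs, eps=1e-8):
    """Decision-level transfer: KL divergence from the teacher's
    segmentation probabilities to the student's.
    (KL is an assumed choice for illustration.)"""
    return float(np.sum(hr_probs * (np.log(hr_probs + eps)
                                    - np.log(lr_probs + eps))))

def total_kt_loss(seg_loss, hr_feats, lr_feats, hr_probs, lr_probs,
                  alpha=0.5, beta=0.5):
    """Integrated multi-level loss: supervised segmentation loss plus
    weighted feature- and decision-level transfer terms.
    alpha and beta are hypothetical trade-off weights."""
    return (seg_loss
            + alpha * feature_transfer_loss(hr_feats, lr_feats)
            + beta * decision_transfer_loss(hr_probs, lr_probs))
```

When the student's features and predictions exactly match the teacher's, both transfer terms vanish and the objective reduces to the plain segmentation loss, which is the intended behavior of such a distillation scheme.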
