Abstract

Segmentation of focal liver lesions serves as an essential preprocessing step for initial diagnosis, stage differentiation, and post-treatment efficacy evaluation. Multimodal MRI scans (e.g., T1WI, T2WI) provide complementary information on liver lesions and are widely used for diagnosis. However, some modalities (e.g., T1WI) have high resolution but lack important visual information (e.g., edges) that is present in other modalities (e.g., T2WI). It is therefore important to enhance tissue lesion quality in T1WI using priors from other modalities (T2WI) and thereby improve segmentation performance. In this paper, we propose a graph learning based approach motivated by extracting modality-specific features efficiently and establishing regional correspondence between T1WI and T2WI effectively. We first project deep features into a graph domain and employ graph convolution to propagate information across all regions, extracting modality-specific features. We then propose a mutual information based graph co-attention module to learn the weight coefficients of a bipartite graph constructed by fully connecting the graphs of the two modalities in the graph domain. Finally, we obtain the refined features for segmentation by re-projection and a residual connection. We validate our method on a multimodal MRI liver lesion dataset. Experimental results show that the proposed approach improves liver lesion segmentation in T1WI compared to existing methods by learning guided features from multimodal priors (T2WI).
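To make the pipeline described above concrete, the following is a minimal sketch of the projection / graph convolution / bipartite co-attention / re-projection steps. All module names, shapes, and the soft-assignment projection are assumptions made for illustration, and a simple dot-product affinity stands in for the mutual information based weighting described in the abstract; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphCoAttentionSketch(nn.Module):
    """Illustrative sketch (not the authors' code): project CNN features of each
    modality into a graph domain, apply a graph convolution per modality, exchange
    information over a bipartite graph via co-attention, then re-project the T1WI
    nodes to the spatial grid and add a residual connection."""

    def __init__(self, channels: int, num_nodes: int):
        super().__init__()
        # Soft assignment of spatial locations to graph nodes (projection step).
        self.assign_t1 = nn.Conv2d(channels, num_nodes, kernel_size=1)
        self.assign_t2 = nn.Conv2d(channels, num_nodes, kernel_size=1)
        # Node feature transforms used by the per-modality graph convolutions.
        self.gconv_t1 = nn.Linear(channels, channels)
        self.gconv_t2 = nn.Linear(channels, channels)
        # Re-projection back to the spatial domain for the T1WI branch.
        self.reproject = nn.Conv2d(channels, channels, kernel_size=1)

    @staticmethod
    def _to_graph(feat, assign_logits):
        # feat: (B, C, H, W) -> nodes: (B, N, C) via soft region pooling.
        assign = torch.softmax(assign_logits.flatten(2), dim=-1)          # (B, N, HW)
        nodes = torch.bmm(assign, feat.flatten(2).transpose(1, 2))        # (B, N, C)
        return nodes, assign

    def forward(self, feat_t1, feat_t2):
        # 1) Project each modality's features into the graph domain.
        nodes_t1, assign_t1 = self._to_graph(feat_t1, self.assign_t1(feat_t1))
        nodes_t2, _ = self._to_graph(feat_t2, self.assign_t2(feat_t2))

        # 2) Graph convolution: propagate information among regions. A normalized
        #    feature-similarity adjacency stands in for the learned graph here.
        adj_t1 = torch.softmax(torch.bmm(nodes_t1, nodes_t1.transpose(1, 2)), dim=-1)
        adj_t2 = torch.softmax(torch.bmm(nodes_t2, nodes_t2.transpose(1, 2)), dim=-1)
        nodes_t1 = F.relu(self.gconv_t1(torch.bmm(adj_t1, nodes_t1)))
        nodes_t2 = F.relu(self.gconv_t2(torch.bmm(adj_t2, nodes_t2)))

        # 3) Co-attention over the bipartite graph between T1WI and T2WI nodes:
        #    the affinity weights decide how much each T2WI node contributes.
        affinity = torch.softmax(torch.bmm(nodes_t1, nodes_t2.transpose(1, 2)), dim=-1)
        nodes_t1 = nodes_t1 + torch.bmm(affinity, nodes_t2)

        # 4) Re-project the refined T1WI nodes to the spatial grid, residual add.
        b, c, h, w = feat_t1.shape
        spatial = torch.bmm(assign_t1.transpose(1, 2), nodes_t1)          # (B, HW, C)
        spatial = spatial.transpose(1, 2).reshape(b, c, h, w)
        return feat_t1 + self.reproject(spatial)
```

Under these assumptions, the module takes the two modality feature maps (same spatial size and channel count) and returns a refined T1WI feature map that a segmentation head could consume.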
