Abstract

Medical image segmentation can provide doctors with more direct information on the location and size of organs or lesions, making it a valuable auxiliary task for prostate cancer grading. Meanwhile, diagnostic data other than images, such as patient age and Prostate-Specific Antigen (PSA) level, are also essential. Currently, there is a lack of in-depth research on how to effectively differentiate and select shared and task-specific features in multitask learning, as well as how to balance and exploit the potential correlations between different tasks. In this paper, we propose a novel Shared Feature Hybrid Gating Experts (SFHGE) architecture for collaborative learning of the main task (lesion grading) and the auxiliary task (lesion segmentation), dynamically selecting shared and task-specific features. To efficiently utilize complementary features, we also introduce a Cross-Task Attention module (CrossTA) to capture an integrated cross-task representation. Additionally, recognizing that non-image clinical information often provides crucial diagnostic insights, we further design a Heterogeneous Information Fusion Network (HIFN) to better integrate clinical data, thereby improving grading performance. Extensive experiments on the PI-CAI dataset demonstrate that our approach outperforms mainstream classification and segmentation models.
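To make the two central ideas concrete, the sketch below illustrates (i) a hybrid gating-experts layer that mixes shared and task-specific experts through a per-task softmax gate, and (ii) a cross-task attention block that lets one task's features attend to the other's. This is only a minimal illustration of the general mechanisms named above: the module names, expert/gate layout, and dimensions are assumptions, not the paper's actual SFHGE or CrossTA implementation.

```python
# Minimal PyTorch sketch (assumed design, not the paper's code): a hybrid
# gating-experts layer and a cross-task attention block.
import torch
import torch.nn as nn


class HybridGatingExperts(nn.Module):
    """Mix shared and task-specific experts with a per-task softmax gate."""

    def __init__(self, dim, num_shared=2, num_specific=2, num_tasks=2):
        super().__init__()
        self.shared = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_shared)]
        )
        self.specific = nn.ModuleList(
            [
                nn.ModuleList(
                    [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_specific)]
                )
                for _ in range(num_tasks)
            ]
        )
        # One gate per task over (shared + its own task-specific) experts.
        self.gates = nn.ModuleList(
            [nn.Linear(dim, num_shared + num_specific) for _ in range(num_tasks)]
        )

    def forward(self, x):
        # x: (batch, dim) backbone features; returns one feature tensor per task.
        outputs = []
        for t, gate in enumerate(self.gates):
            experts = [e(x) for e in self.shared] + [e(x) for e in self.specific[t]]
            stacked = torch.stack(experts, dim=1)            # (batch, E, dim)
            weights = gate(x).softmax(dim=-1).unsqueeze(-1)  # (batch, E, 1)
            outputs.append((stacked * weights).sum(dim=1))   # (batch, dim)
        return outputs


class CrossTaskAttention(nn.Module):
    """Let one task's features attend to the other task's features."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query_feats, context_feats):
        # query_feats, context_feats: (batch, tokens, dim)
        fused, _ = self.attn(query_feats, context_feats, context_feats)
        return query_feats + fused  # residual fusion of complementary features


if __name__ == "__main__":
    x = torch.randn(4, 256)
    grading_feat, seg_feat = HybridGatingExperts(256)(x)
    fused = CrossTaskAttention(256)(grading_feat.unsqueeze(1), seg_feat.unsqueeze(1))
    print(fused.shape)  # torch.Size([4, 1, 256])
```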
