Abstract

Deep learning is crucial for preliminary screening and diagnostic assistance based on medical image analysis. However, limited annotated data and complex anatomical structures challenge existing models, which struggle to capture anatomical context effectively. In response, we propose a novel self-supervised multi-task learning framework (SSMT) that integrates two key modules: a discriminative module and a generative module. These modules collaborate through multiple proxy tasks, encouraging the model to learn both global discriminative representations and local fine-grained representations. Additionally, we introduce an efficient uniformity regularization to further enhance the learned representations. To demonstrate the effectiveness of SSMT, we conduct extensive experiments on six public chest X-ray datasets. Our results show that SSMT not only outperforms existing state-of-the-art methods but also achieves performance comparable to supervised models on challenging downstream tasks. An ablation study confirms the collaboration among SSMT's key components, showcasing its potential for advancing medical image analysis.
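The abstract mentions an "efficient uniformity regularization" without detailing it. As a point of reference only, a minimal PyTorch sketch of a standard uniformity term over L2-normalized embeddings (in the spirit of the Gaussian-potential uniformity loss commonly used in contrastive representation learning) is given below; the function name, the temperature `t`, and the weight `lambda_u` are illustrative assumptions and may differ from the regularizer actually used in SSMT.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Uniformity term over a batch of embeddings z with shape (batch, dim).

    Encourages features to spread uniformly on the unit hypersphere by
    penalizing small pairwise distances between embeddings.
    Note: this is a generic sketch, not SSMT's exact formulation.
    """
    z = F.normalize(z, dim=-1)                  # project embeddings onto the unit hypersphere
    sq_dists = torch.pdist(z, p=2).pow(2)       # pairwise squared Euclidean distances
    return sq_dists.mul(-t).exp().mean().log()  # log of the mean Gaussian potential

# Hypothetical usage: add the term to the main self-supervised objective.
# total_loss = proxy_task_loss + lambda_u * uniformity_loss(embeddings)
```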
