Abstract

Accurate segmentation of hepatocellular carcinoma regions is a critical step in the surgical evaluation of whole slide pathological images. Recently, multi-magnification learning-based methods have shown promise in the evaluation of whole slide images. However, traditional multi-magnification segmentation models treat all magnifications uniformly and fail to effectively exploit magnification-specific information. The purpose of this study is to develop a novel multi-magnification learning segmentation model that effectively utilizes diverse magnification information. To this end, we developed a novel multi-magnification feature self-enhancement network that extracts magnification-specific features from multi-magnification images via self-supervised learning, without extra annotation. Specifically, the proposed network enhances feature information via a self-supervised super-resolution module that transfers representative information into a segmentation encoder, taking full advantage of the pyramid storage structure of whole slide images. Moreover, a multi-scale feature fusion module based on the attention mechanism fuses pretrained multi-scale features through a gated unit block, which is designed to fuse features from the generation task into the segmentation task. We evaluated this method on the hepatocellular carcinoma data set from The Cancer Genome Atlas with cross-validation and achieved a Dice similarity coefficient of 0.829, nearly a 3% improvement over state-of-the-art segmentation models. To the best of our knowledge, this is the first study to apply super-resolution as a self-enhancement network for histopathological image segmentation via a self-supervised pretext task. The code will be available at https://github.com/SH-Diao123/Self-Supervised-Multi-magnification-Segmentation.
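
The abstract does not specify how the gated unit block combines the two feature streams; as an illustration only, the following minimal PyTorch sketch shows one plausible gated fusion at a single scale. The class name, channel counts, and the sigmoid-gated residual formulation are all assumptions for clarity, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GatedFusionUnit(nn.Module):
    """Hypothetical gated unit fusing features from the self-supervised
    super-resolution (generation) branch into the segmentation encoder
    at one scale. The gate learns, per channel and spatial location,
    how much of the generation-task feature to admit."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 conv over the concatenated features produces a sigmoid gate.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, seg_feat: torch.Tensor, sr_feat: torch.Tensor) -> torch.Tensor:
        # seg_feat, sr_feat: (B, C, H, W) feature maps at the same scale.
        g = self.gate(torch.cat([seg_feat, sr_feat], dim=1))
        # Gated residual: keep the segmentation feature and admit a
        # learned fraction of the super-resolution feature.
        return seg_feat + g * sr_feat


# Toy usage: fuse 64-channel feature maps at 32x32 resolution.
if __name__ == "__main__":
    fuse = GatedFusionUnit(channels=64)
    seg = torch.randn(2, 64, 32, 32)
    sr = torch.randn(2, 64, 32, 32)
    print(fuse(seg, sr).shape)  # torch.Size([2, 64, 32, 32])
```

A gated residual of this kind lets the segmentation path fall back to its own features when the generation-task features are uninformative, which matches the abstract's description of selectively transferring representative information between the two tasks.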
