Abstract

Pathological diagnosis is the gold standard for disease assessment in clinical practice. It is conducted by inspecting the specimen at the microscopic level. In the era of digital pathology, very high-resolution pathological images that precisely capture submicron-scale appearance are therefore essential, yet they are not easily obtained. Recently, pathological image super-resolution (SR) has shown promise in bridging this gap. However, existing studies have not fully explored the peculiarity of pathological data, which comprises a series of progressively enlarged images depicting the specimen at different magnifications. In this paper, we propose a novel MMSRNet that formulates pathological image SR as a multi-task learning problem. It adds an image magnification classification branch on top of a CNN-based SR network, e.g., RCAN. The learning objective thus becomes performing SR while classifying the magnification as accurately as possible. The incorporated classification label guides the network to learn a more powerful feature representation. Meanwhile, the multi-task learning paradigm also encourages the joint learning of multi-scale mapping functions corresponding to multiple magnifications. This enables the learned model to adaptively accommodate magnification variations, overcoming the limitation of existing studies in which SR at different magnifications is treated as independent tasks. Extensive experiments are conducted to validate the effectiveness of MMSRNet. It not only achieves better performance in SR across magnifications and scaling factors, but also exhibits an attractive plug-and-play property when RCAN is replaced by other SR networks. The generated images are also expected to be helpful in clinical diagnosis.
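The sketch below illustrates the multi-task idea summarized in the abstract: a shared SR backbone whose intermediate features also feed a magnification classification head, trained with a reconstruction loss plus a weighted classification loss. It is a minimal illustration assuming a generic backbone interface; the module names, the feature hook, and the loss weight are assumptions, not the authors' implementation.

```python
# Minimal multi-task sketch (PyTorch). Assumes the SR backbone returns both the
# super-resolved image and an intermediate feature map; this interface and all
# hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiMagSRNet(nn.Module):
    def __init__(self, sr_backbone: nn.Module, feat_channels: int, num_magnifications: int = 3):
        super().__init__()
        self.sr_backbone = sr_backbone  # e.g., an RCAN-style trunk (assumed interface)
        # Magnification classification head on top of the shared features.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_channels, num_magnifications),
        )

    def forward(self, lr_image):
        # Assumed backbone output: (SR image, intermediate feature map).
        sr_image, features = self.sr_backbone(lr_image)
        mag_logits = self.cls_head(features)
        return sr_image, mag_logits


def multitask_loss(sr_image, hr_image, mag_logits, mag_label, cls_weight=0.1):
    """L1 reconstruction loss plus a weighted cross-entropy term for magnification."""
    rec = F.l1_loss(sr_image, hr_image)
    cls = F.cross_entropy(mag_logits, mag_label)
    return rec + cls_weight * cls
```

In this setup, the classification branch only supervises the shared features during training, so it can be dropped at inference time and the SR backbone can be swapped for another CNN-based SR network, mirroring the plug-and-play property described in the abstract.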
