Abstract

Automatic gastric cancer segmentation is a challenging problem in digital pathology image analysis. Accurate segmentation of gastric cancer regions can efficiently facilitate clinical diagnosis and pathological research. Technically, this problem suffers from the varying sizes, vague boundaries, and non-rigid shapes of cancerous regions. To address these challenges, we use a deep learning based method and integrate several customized modules. Structurally, we replace the basic form of convolution with deformable and atrous convolutions in specific layers, to adapt to non-rigid shapes and enlarge the receptive field. We take advantage of the Atrous Spatial Pyramid Pooling module and encoder-decoder based semantic-level embedding networks for multi-scale segmentation. In addition, we propose a lightweight decoder to fuse the contextual information, and utilize dense upsampling convolution for boundary refinement at the end of the decoder. Experimentally, extensive comparative experiments are conducted on our own gastric cancer segmentation dataset, which was carefully annotated at pixel level by medical specialists. The quantitative comparisons against several prior methods demonstrate the superiority of our approach. We achieve 91.60% pixel-level accuracy and 82.65% mean Intersection over Union.
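The atrous (dilated) convolution mentioned in the abstract enlarges the receptive field without adding parameters: with dilation rate r, a kernel of size k covers (k - 1) * r + 1 input samples. A minimal 1-D sketch of this idea follows; `dilated_conv1d` is a hypothetical illustrative helper, not code from the paper, which operates on 2-D feature maps inside a deep network.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """Valid-mode 1-D atrous (dilated) convolution.

    With dilation rate `rate`, a kernel of size k samples the input
    at stride `rate`, covering an effective receptive field of
    (k - 1) * rate + 1 samples while using the same k weights.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1          # effective receptive field
    out_len = len(x) - span + 1
    out = np.empty(out_len)
    for i in range(out_len):
        # Sum over kernel taps placed `rate` samples apart ("holes").
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

signal = np.arange(10, dtype=float)    # [0, 1, ..., 9]
box = np.array([1.0, 1.0, 1.0])        # simple 3-tap kernel

dense = dilated_conv1d(signal, box, rate=1)   # ordinary convolution, field = 3
atrous = dilated_conv1d(signal, box, rate=2)  # same 3 weights, field = 5
```

Stacking such layers with growing rates, as in the Atrous Spatial Pyramid Pooling module, aggregates context at multiple scales without downsampling the feature map.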
