Abstract

In this work, we address a persistent problem in biomedical image segmentation: predicted masks often fail to capture the exact contours of the target and suffer from ambiguity. Most previous techniques are suboptimal because they simply concatenate contour information to alleviate this problem while ignoring the correlation between regions and contours. In fact, the relationship between cross-domain features is an important cue for segmenting ambiguous pixels in biomedical images. To this end, we contribute a simple yet effective framework, the Contour-Guided Graph Reasoning Network (CGRNet), for more accurate segmentation under ambiguity, which captures the semantic relations between object regions and contours through graph reasoning. Specifically, we first build a global graph representation of the low-level and high-level features extracted by the feature extractor, where clusters of pixels with similar features are mapped to each vertex. We then explicitly incorporate contour information as a geometric prior, aggregating the features of contour pixels onto graph vertices so that the model attends to features along object boundaries. Next, the cross-domain features propagate information across the graph vertices to efficiently learn and reason about semantic relations. Finally, the refined graph features are projected back to the original pixel coordinate space for the pixel-wise segmentation task. Extensive experiments on three publicly available datasets, Kvasir, CVC-612, and COVID19-100, demonstrate the effectiveness of our CGRNet, which outperforms existing state-of-the-art methods. Our code is publicly available at: https://github.com/DLWK/CGRNet.
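The project-reason-reproject pipeline the abstract describes can be sketched in a few lines. The following is a minimal numpy illustration, not the paper's implementation: the soft pixel-to-vertex assignment, the vertex adjacency, and all weight matrices are random stand-ins for what CGRNet learns, and the contour-guided aggregation is omitted.

```python
import numpy as np

def graph_reason(features, num_vertices=8, seed=0):
    """Sketch of pixel-to-graph projection, one step of graph
    propagation, and reprojection back to pixel space.

    features: (N, C) array of pixel features flattened from an H*W map.
    All weights are random placeholders (hypothetical), not learned.
    """
    rng = np.random.default_rng(seed)
    n_pixels, channels = features.shape

    # 1) Soft assignment of pixels to graph vertices (learned in CGRNet;
    #    here a random projection followed by a softmax stands in for it).
    w_assign = rng.standard_normal((channels, num_vertices))
    logits = features @ w_assign
    assign = np.exp(logits - logits.max(axis=1, keepdims=True))
    assign /= assign.sum(axis=1, keepdims=True)      # (N, V)
    vertices = assign.T @ features                   # (V, C) vertex features

    # 2) One round of message passing: normalized adjacency times
    #    vertex features, a linear transform, then ReLU.
    adjacency = np.full((num_vertices, num_vertices), 1.0 / num_vertices)
    w_graph = rng.standard_normal((channels, channels)) * 0.1
    vertices = np.maximum(adjacency @ vertices @ w_graph, 0.0)

    # 3) Reproject the refined vertex features back to pixel coordinates.
    return assign @ vertices                         # (N, C)

# Toy usage: 16 "pixels" with 4-channel features, reasoned over 3 vertices.
pixels = np.random.default_rng(1).standard_normal((16, 4))
refined = graph_reason(pixels, num_vertices=3)
print(refined.shape)  # (16, 4)
```

The key property is that the output keeps the pixel-space shape, so the refined features can feed a standard segmentation head.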
