Abstract
Accurate segmentation of medical images is essential for computer-aided diagnosis. Transformers show great promise in this task: they complement local convolutions by capturing long-range dependencies via self-attention, and recent methods perform well at modeling global context. However, they handle boundary blurring poorly because they ignore the edge prior and its complementarity with the global context. To address this, we propose a segmentation network built on informative cross-scale priors. The encoder uses self-attention to capture long-range dependencies, while the proposed cross-scale prior decoder fully exploits the multi-scale features of the hierarchical vision transformer: a prior perceptron captures boundary information, and a pattern perceptron suppresses background information to strengthen both long-range and local context. By combining the edge prior and the global context so that they complement each other, the network substantially reduces inaccurate boundary segmentation. Extensive experiments on multiple segmentation datasets validate the model's strong performance.
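To make the decoder's structure concrete, the following is a minimal PyTorch sketch of how a cross-scale prior decoder with a prior perceptron and a pattern perceptron could be wired together. The internals (edge head, channel gating, summation-based fusion, and all module and parameter names such as `PriorPerceptron`, `PatternPerceptron`, and `channels`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PriorPerceptron(nn.Module):
    """Hypothetical boundary-prior branch: predicts a soft edge map from the
    finest-scale feature and uses it to emphasize boundary regions."""
    def __init__(self, channels):
        super().__init__()
        self.edge_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat):
        edge = torch.sigmoid(self.edge_head(feat))  # edge prior in [0, 1]
        return feat * (1 + edge), edge              # reweight toward boundaries


class PatternPerceptron(nn.Module):
    """Hypothetical context branch: channel attention that damps background
    responses before cross-scale fusion."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat):
        return feat * self.gate(feat)  # suppress background channels


class CrossScalePriorDecoder(nn.Module):
    """Sketch of a decoder fusing multi-scale transformer features, with the
    edge prior injected at the finest scale."""
    def __init__(self, channels=(64, 128, 320, 512), num_classes=2):
        super().__init__()
        self.prior = PriorPerceptron(channels[0])
        self.patterns = nn.ModuleList(PatternPerceptron(c) for c in channels[1:])
        self.proj = nn.ModuleList(nn.Conv2d(c, channels[0], 1) for c in channels)
        self.head = nn.Conv2d(channels[0], num_classes, 1)

    def forward(self, feats):
        # feats: encoder outputs ordered fine to coarse resolution
        fine, edge = self.prior(feats[0])
        fused = self.proj[0](fine)
        for i, feat in enumerate(feats[1:]):
            gated = self.patterns[i](feat)
            up = F.interpolate(self.proj[i + 1](gated), size=fused.shape[-2:],
                               mode="bilinear", align_corners=False)
            fused = fused + up  # cross-scale fusion by summation
        return self.head(fused), edge


# Dummy forward pass with assumed feature shapes (batch 1, 64x64 input grid).
feats = [torch.randn(1, c, 64 // 2 ** i, 64 // 2 ** i)
         for i, c in enumerate((64, 128, 320, 512))]
logits, edge = CrossScalePriorDecoder()(feats)
```

Returning the edge map alongside the segmentation logits reflects a common design choice for boundary-aware decoders, where an auxiliary edge loss supervises the prior branch during training.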