Abstract

Medical image segmentation demands attention to fine detail. Rapid progress in deep learning for image processing has made it possible to segment objects accurately even on small datasets. In this paper, we propose a hierarchical multi-scale attention network that focuses on the fine-grained structures of the target. Our network consists of a hierarchical encoder module with dense connections, a multi-scale attention module that attends to fine-grained structures, and a decoder module. We also combine a detail-weighted cross-entropy loss with the Dice coefficient loss to increase sensitivity to fine structures. To verify the module's performance, we carried out a series of comparative experiments on the multi-scale attention module using the DRIVE dataset, determined the best structure experimentally, and compared it with several classical deep learning models. Our experiments show that extracting semantic information from images at an appropriate resolution can also improve the accuracy of detail segmentation. To demonstrate the generalization ability of the approach, we conducted experiments on the DRIVE, STARE, and CHASE_DB1 datasets; our method achieved sensitivities of 0.8802/0.8464/0.8216, specificities of 0.9756/0.9869/0.9784, and accuracies of 0.9675/0.9657/0.9637, respectively.
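As an illustration of the kind of loss combination described above, the sketch below mixes a foreground-weighted binary cross-entropy term with a soft Dice term in PyTorch. This is a minimal example, not the authors' exact formulation; the `pos_weight` and `dice_weight` values are assumed for illustration only.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, pos_weight=5.0, dice_weight=0.5, eps=1e-6):
    """Weighted binary cross-entropy + soft Dice loss (illustrative sketch).

    logits: raw network outputs, shape (N, 1, H, W)
    target: binary ground-truth masks, same shape, values in {0, 1}
    pos_weight: extra weight on vessel (foreground) pixels -- assumed value
    dice_weight: mixing coefficient between the two terms -- assumed value
    """
    # Weighted BCE: up-weight the sparse vessel pixels so thin branches
    # contribute more to the loss than the dominant background.
    bce = F.binary_cross_entropy_with_logits(
        logits, target,
        pos_weight=torch.tensor(pos_weight, device=logits.device),
    )

    # Soft Dice on the predicted probabilities, averaged over the batch.
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2.0 * intersection + eps) / (union + eps)
    dice_loss = 1.0 - dice.mean()

    return (1.0 - dice_weight) * bce + dice_weight * dice_loss
```

Weighting the cross-entropy term toward foreground pixels and adding a Dice term is a common way to keep thin, low-area structures from being overwhelmed by the background during training.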
