Abstract

Accurate segmentation of retinal fundus vessels can contribute to the precise diagnosis of diseases that cause structural changes in the retinal vasculature. At present, retinal vessels are usually segmented manually in most hospitals, but manual segmentation is time-consuming and labor-intensive. Moreover, owing to the complex morphology of blood vessels, accurate automatic segmentation remains a challenging task. To address these problems, this paper proposes a multi-scale dense network (MD-Net) that makes full use of multi-scale information and encoder features. Residual atrous spatial pyramid pooling (Res-ASPP) modules are embedded in the encoder to extract multi-scale vessel information with improved information flow. Furthermore, a dense multi-level fusion mechanism is proposed to densely merge the multi-level features of the encoder and decoder, minimizing feature loss. In addition, squeeze-and-excitation (SE) blocks are applied in the concatenation layers to emphasize informative feature channels. The network is evaluated on the DRIVE, STARE and CHASE_DB1 databases, achieving accuracy / dice similarity coefficient (DSC) / sensitivity / specificity of 0.9676/0.8099/0.8065/0.9826, 0.9732/0.8411/0.8290/0.9866 and 0.9731/0.7877/0.7504/0.9889, respectively. The overall performance of MD-Net exceeds that of other current state-of-the-art vessel segmentation methods, indicating that the proposed network is well suited to retinal blood vessel segmentation and of great clinical significance.
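To illustrate the channel-reweighting idea behind the SE blocks mentioned above, the following is a minimal NumPy sketch of a standard squeeze-and-excitation operation (global average pooling, a bottleneck of two fully connected layers, and sigmoid gating). The function name, weight shapes, and reduction ratio are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def squeeze_excitation(x, w1, b1, w2, b2):
    """Apply squeeze-and-excitation to a feature map x of shape (C, H, W).

    w1: (C // r, C) bottleneck weights, w2: (C, C // r) expansion weights,
    where r is the channel reduction ratio (an assumed hyperparameter).
    """
    # Squeeze: global average pooling yields one descriptor per channel, shape (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid produces per-channel weights in (0, 1)
    s = np.maximum(w1 @ z + b1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))
    # Scale: reweight each channel of the original feature map
    return x * s[:, None, None]
```

In a concatenation layer, such gating lets the network suppress less informative channels coming from the skip connections before they are fused with decoder features.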


