Abstract

Accurate retinal vessel segmentation is a challenging problem in color fundus image analysis, and an automatic segmentation system can effectively support clinical diagnosis and ophthalmological research. The task is complicated by large variation in vessel thickness, the need to perceive fine details, and the difficulty of fusing contextual features. To address these challenges, a deep-learning-based method is proposed that integrates several customized modules into the well-known U-Net encoder–decoder architecture, which is widely used in medical image segmentation. Cascaded dilated convolution modules are inserted into the intermediate layers to enlarge the receptive field and produce denser encoded feature maps. A spatially continuous pyramid module is further exploited for multi-thickness perception, detail refinement, and contextual feature fusion. Additionally, the effectiveness of different normalization approaches is examined on datasets with distinct properties. Finally, extensive comparative experiments are conducted on three retinal vessel segmentation datasets: DRIVE, CHASE_DB1, and STARE, the last of which includes pathological samples. The proposed method outperforms previous approaches and achieves state-of-the-art performance.
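To make the cascaded dilated convolution idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: several 3x3 convolutions with increasing dilation rates applied in sequence so the receptive field grows without downsampling the feature map. The channel count and the dilation rates (1, 2, 4) are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class CascadedDilatedBlock(nn.Module):
    """Sketch of a cascaded dilated convolution block for U-Net intermediate layers."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):  # dilation rates are assumed, not from the paper
        super().__init__()
        layers = []
        for d in dilations:
            layers += [
                # padding equal to the dilation keeps the 3x3 conv spatial-size preserving
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual connection keeps the encoded features dense while the
        # stacked dilations enlarge the effective receptive field.
        return x + self.body(x)

# Usage: apply the block to an intermediate U-Net feature map.
feats = torch.randn(1, 64, 64, 64)
out = CascadedDilatedBlock(64)(feats)
print(out.shape)  # torch.Size([1, 64, 64, 64])
```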
