Abstract

Deep learning based methods for retinal vessel segmentation are usually trained with pixel-wise losses, which assign equal importance to every vessel pixel in the pixel-to-pixel matching between a predicted probability map and the corresponding manually annotated segmentation. However, because the pixel ratio between thick and thin vessels in fundus images is highly imbalanced, a pixel-wise loss limits the ability of deep learning models to learn features for accurate segmentation of thin vessels, which is important for the clinical diagnosis of eye-related diseases. In this paper, we propose a new segment-level loss that places greater emphasis on the thickness consistency of thin vessels during training. By jointly adopting the segment-level and pixel-wise losses, the relative importance of thick and thin vessels in the loss calculation becomes more balanced. As a result, more effective features can be learned for vessel segmentation without increasing the overall model complexity. Experimental results on public data sets demonstrate that the model trained with the joint loss outperforms current state-of-the-art methods in both separate-training and cross-training evaluations. Compared to the pixel-wise loss alone, the proposed joint-loss framework learns more distinguishable features for vessel segmentation. In addition, the segment-level loss brings consistent performance improvement for both deep and shallow network architectures. The findings of this study on joint losses can be applied to other deep learning models to improve performance without significantly changing the network architectures.
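The abstract does not give the exact formulation of the segment-level loss, so the sketch below only illustrates the general idea of combining a pixel-wise term with a per-segment term in a joint loss. The function name joint_loss, the seg_weight parameter, and the use of per-segment pixel mass as a proxy for thickness consistency are hypothetical placeholders introduced for illustration, not the authors' actual loss.

```python
# Minimal sketch, assuming a PyTorch model that outputs a per-pixel vessel
# probability map. The segment-level term here is a hypothetical proxy: the
# annotated vessels are split into connected segments, and each segment's
# relative mismatch in pixel mass is penalised equally, so thin segments are
# not dominated by thick ones as they are in a purely pixel-wise loss.
import torch
import torch.nn.functional as F
from scipy.ndimage import label


def joint_loss(pred_prob, gt_mask, seg_weight=0.5):
    """pred_prob, gt_mask: (H, W) float tensors with values in [0, 1]."""
    # Standard pixel-wise term: binary cross-entropy over all pixels.
    pixel_term = F.binary_cross_entropy(pred_prob, gt_mask)

    # Hypothetical segment-level term: connected components of the annotation.
    segments, n_seg = label(gt_mask.detach().cpu().numpy() > 0.5)
    seg_term = pred_prob.new_zeros(())
    for s in range(1, n_seg + 1):
        seg_mask = torch.from_numpy(segments == s).to(pred_prob.device)
        gt_mass = gt_mask[seg_mask].sum()        # annotated pixel mass
        pred_mass = pred_prob[seg_mask].sum()    # predicted pixel mass
        seg_term = seg_term + torch.abs(pred_mass - gt_mass) / (gt_mass + 1e-6)
    if n_seg > 0:
        seg_term = seg_term / n_seg

    # Joint loss: pixel-wise term plus weighted segment-level term.
    return pixel_term + seg_weight * seg_term
```

In practice the per-segment loop would be vectorised and applied batch-wise during training; the sketch only shows how a segment-level term can be added to a pixel-wise loss without changing the network architecture, which is the property the abstract emphasizes.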
