Abstract
Background
Retinal vessel segmentation provides an important basis for determining the geometric characteristics of retinal vessels and for diagnosing related diseases. Retinal vasculature comprises both coarse and fine vessels, which are unevenly distributed across the retina. Current deep-learning-based segmentation networks extract coarse vessels readily but tend to miss the harder-to-extract fine vessels.

Methods
A scale-aware dense residual module, a multi-output weighted loss, and an attention mechanism are proposed and incorporated into a U-shaped network. The model extracts image features through residual modules and applies a multi-scale feature aggregation method after the last encoder layer to capture deep network information. The output of each decoder layer is upsampled and compared with the ground truth separately, yielding multiple output losses; the last decoder layer is used as the final prediction output.

Results
The proposed network is tested on the DRIVE and STARE datasets. The evaluation metrics used in this paper are Dice, accuracy, mIoU, and recall. On the DRIVE dataset, these four metrics are 80.40%, 96.67%, 82.14%, and 88.10%, respectively; on the STARE dataset, they are 83.41%, 97.39%, 84.38%, and 88.84%.

Conclusion
The experimental results show that the proposed network performs better, extracts more continuous fine vessels, and reduces missed and false segmentation to a certain extent.
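The multi-output weighted loss described in Methods can be illustrated with a minimal sketch: each decoder layer's upsampled prediction is compared with the ground truth, and the per-layer losses are combined with weights. This is an assumption-laden illustration, not the paper's implementation; the Dice loss, the function names, and the example weights are chosen here for clarity, and real training code would operate on framework tensors rather than NumPy arrays.

```python
import numpy as np

def dice_loss(pred, gt, eps=1e-6):
    # Soft Dice loss between a probability map and a binary ground truth.
    inter = np.sum(pred * gt)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(gt) + eps)

def multi_output_weighted_loss(preds, gt, weights):
    # preds: one upsampled probability map per decoder layer, all already
    # resized to the ground-truth resolution (as described in Methods).
    # The total loss is a weighted sum of the per-layer Dice losses.
    assert len(preds) == len(weights)
    return sum(w * dice_loss(p, gt) for p, w in zip(preds, weights))

# Toy example: a 4x4 ground truth and two decoder-layer outputs.
gt = np.zeros((4, 4))
gt[1:3, 1:3] = 1.0
good = gt.copy()             # perfect prediction -> Dice loss ~ 0
poor = np.full((4, 4), 0.5)  # uncertain prediction -> higher Dice loss
total = multi_output_weighted_loss([good, poor], gt, weights=[0.6, 0.4])
print(round(total, 4))  # -> 0.2667
```

Weighting the deeper (final) decoder layer more heavily is a common choice in such deep-supervision schemes, since that layer produces the final prediction.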